Yesterday around lunchtime, I took a break from reading academic books about eighteenth- and nineteenth-century culture at one of my favorite places in New York, the New York Public Library, so that I could attend the March against Monsanto that was about to begin in Bryant Park, the lovely and popular little public park behind the library. This march was actually the second such protest to take place all over America and across the world; the first was on May 25. If you want to see some photos and YouTube clips of this worldwide protest, click [here]. I attended the march for a few reasons, one being simply that it was near where I already was, another being that I support most of its goals, and last but not least, because it closely relates to the book that all the first-year students at my college were asked to read over the summer, Raj Patel’s Stuffed and Starved: The Hidden Battle for the World Food System — a book that most of my students told me they found a little difficult and convoluted and therefore a lot boring. In my view, it’s an important and interesting book, so I’m hoping here to make that clear. The main idea of both the march and the book is that our food system is being monopolized by corporate interests in ways that are unhealthy for both the consumer and the producer. Examples of this problem are the obesity epidemic as well as the high rates of suicide among small farmers struggling to maintain their farms in countries such as India. The march focused on the issue of genetically modified organisms (GMOs) that Monsanto creates and actively lobbies governments to promote in their countries’ agriculture.
Inexplicably, this worldwide movement has not been covered by the New York Times. It is hard to imagine why the Times doesn’t cover it, since it seems to me more interesting and more relevant to people’s lives than the article about the spending habits of a Catholic bishop in Germany or the article about the dentist whose clients sometimes pay her in works of art. In my opinion, something that takes place in more than 500 cities around the world at exactly the same time deserves at least a mention. We could speculate that American journalists are so focused on the supposed conflict between the Democrats and Republicans (e.g., the government shutdown) that it doesn’t occur to them to notice that most Americans have political viewpoints and ideas that are neither Democrat nor Republican.
One might raise the question of whether this march was in fact a failure, since the point of such marches is precisely to make the public aware of important issues by organizing an event that will attract media attention. So, since this event did not attract media attention, was it a failure and, if so, why? We might shift the blame to the newspapers themselves and accuse them of not wanting to upset the corporations that advertise in them, and I would agree it is important for the reading public to be critically aware of this possibility. Since the march was covered by the alternative media, such as the newly formed Al Jazeera America, this may be a reasonable suspicion, though difficult to prove. Or maybe Americans are so focused on the Tea Party opposition to President Obama that they fail to notice the opposition to Obama from the other side of the political spectrum, the so-called “left.” Or maybe the journalists mistakenly thought the march was part of the Comic Convention, since both featured people dressed up in costumes, hahaha. However, in this case, I also wonder about the self-presentation of the march itself. As I listened to the speeches, the march seemed to bring together a diverse array of concerns, including healthier food in public school cafeterias, the right of us consumers to know what we are eating, the long history of Monsanto’s dangerous and illegal business practices, and even a more spiritually fulfilling relationship to our food. The one thing uniting these diverse agendas was simply the evil of Monsanto, which became something of a synecdoche for the world’s problems.
In a sense, the rather long list of various interests and feelings, as well as the hatred of Monsanto, somewhat obscured the two important issues that are actually before our government right now. The first issue is one that has received very little media attention even though it may revolutionize the world economy — something called the Trans-Pacific Partnership (or TPP) that has been under negotiation among countries from Japan to Chile since 2008. Proponents of the TPP argue that it would boost economic growth by encouraging trade, but critics argue that it would empower large corporations and undermine any government’s ability to protect its labor force, the environment, and the health and safety of its food supply. Considering that President Obama has been both actively promoting the TPP and keeping the details of the agreement a secret, this could be one of those strange issues about which both the right-wing Tea Party and the left-wing Green Party and socialist parties could actually find common cause. Obama was hoping to fast-track this bill through Congress this month and thus avoid any substantive public debate (a hope that may have been derailed by the government shutdown, I don’t know.) My guess is that the planners of the march long ago settled on mid-October precisely to bring attention to an issue that they predicted would be rammed through Congress (little suspecting how dysfunctional Congress would be.) The second issue is a more local affair: the bill currently before the New York state legislature requiring all GMO food to be labeled for consumers.
My own observation, from listening to the speeches, looking at the signs, and also noticing how students responded to Raj Patel’s book, is that the emotional energy and rhetoric revolved around the rights of the consumer and some vague notion of authentic and pure food. In other words, the vague feeling is that GMO food is bad because it is not natural. Some speeches argued that we have a “right to know” what is in our food, thus calling attention to the fact that few of us actually have a clue what we are eating most of the time (despite labeling and the efforts of the Food and Drug Administration.) The problem with this sense of “real food” versus GMO food is that a lot of food that is genetically manipulated is not bad for us. Farmers have for centuries crossbred plants and livestock. Thus the problem is not simply GMO; rather, it’s the unsafe and aggressive manner in which Monsanto forces small farmers to use its products.
Don’t get me wrong here. As someone who just taught Upton Sinclair’s famous novel The Jungle, published in 1906, which helped inspire President Teddy Roosevelt and Congress to pass the Pure Food and Drug Act that same year, I certainly care about the role of the FDA and support the regulation of our food supply to ensure that it is healthy and safe. However, Sinclair’s novel was also about the plight of immigrants in Chicago at the turn of the twentieth century and about the exploitation of labor and the monopolization of food production by corporations. It is a somewhat well-known irony among teachers of literature that Sinclair’s intention was so totally misread. In other words, what people noticed in his novel were the long descriptions of the meat-processing factories, which were quite gross, and not the long descriptions of the oppression of workers. The book hence inspired the government to regulate the processing of meat to make it safe for consumers, but it did not (as Sinclair actually hoped it would) inspire the government to protect workers. As Sinclair himself joked, “I aimed at the public’s heart, and by accident I hit it in the stomach.”
I suspect the same thing is happening now that happened with The Jungle. The economy and the long-term effects of trade policy such as the TPP are hard to understand. Likewise, the argument of Raj Patel’s book is complex in drawing connections between obesity in the United States, starvation in India, and migration from Mexico. Ultimately, Patel’s argument is about the political power of multinational corporations, which undermines the ability of farmers to make smart decisions and the ability of local communities to do what they think is in their best interests — and this affects all of us in various ways. However, what many students took away from this book, and what many of the protestors yesterday were focusing on, was some vague, nostalgic attachment to “real” food and some vague idea that we consumers should be able to get “real” food.
The law before the New York legislature right now is precisely the sort of law that focuses on the consumer — the supposed right to know what we are eating. At the rally, the proponents of the law argued that once we have GMO labels on our food, then the public will realize what they are eating and begin to buy non-GMO food, and this would so hurt Monsanto’s profit margin that… hmm… honestly, it wasn’t really clear to me what would be the outcome. I can’t imagine that Monsanto and the global food industry would be hurt so much that they’d change their business model. As the journalist Naomi Klein observed in her famous book, No Logo, such are the limits of political activism that focuses on the rights of the consumer rather than the means of production. Such also are the limits of political activism that focuses so intently on the evils of a single corporation that symbolically represents all that is wrong with the world rather than the trade policy that allows many such corporations to thrive. From the perspective of a literature professor such as myself, both the March against Monsanto and the bill against GMO food have a narrative that is full of symbols and what psychoanalysis calls “displacements” whereby complex political content is reduced to simpler emotional content.
Might the march have been more successful if it had focused on the actual issues — either the worldwide concerns about the TPP or the local legislation against GMO food, or (since they are related and timely) both?
Note: all the photographs in this post were taken by me, but I deliberately selected certain photos and cropped them so that there would be no faces. My intent is to protect individuals who might not want their face on the internet without their permission (especially considering the politically controversial stakes of the march.) An unintended consequence may be that readers of this blog will get the wrong impression that the march was a bunch of people in funny costumes, but actually, for the most part, it was a large crowd of ordinary people of diverse backgrounds, ethnicities, and ages.
Almost every week since 2011, American news corporations have reported on the non-violent grassroots democratic movements in Egypt and Tunisia and the violent, U.S.-supported movements in Syria and Libya — the so-called “Arab Spring.” However, almost never reported are the conditions for a viable democracy in Ethiopia, and even in those few reports about Ethiopia such as this one, what remains missing is any account of the religious, ethnic, and ideological complexities of that country and the changing, multifaceted history of that region. In other words, what remains missing is precisely the information one might need to really understand what is happening. How do we understand human rights and democracy? I’d like to begin with this photograph, taken on Thursday, August 8th, which quickly circulated on various forms of social media and eventually was posted on Al Jazeera last night along with some earlier photographs and Twitter feeds.
The picture is of a young man in the capital city of Addis Ababa, confronting Ethiopian police non-violently by kneeling in prayer before them. Some conversation began on Facebook and Twitter about the symbolic meaning of the photo, and what I’d like to suggest to the readers of my blog is that, for many Americans, the way “democracy” in other countries is understood is largely through images such as this one. It is worth thinking about such images because they often take on a symbolic significance that may be emotionally moving but also may obscure many of the political details and actual functioning of democratic social movements.
But before I continue to think about my questions about how we understand the images that come to symbolize democratic ideals and social movements, I should provide some context for the photograph. Last week, as the month of fasting for Ramadan came to a close and the feast day of Eid al-Fitr was celebrated across the world, Muslims in Ethiopia were protesting the government’s closing of some mosques and its arrests of Muslim community organizers and journalists. The Ethiopian government’s heavy-handed responses to those protests in various towns across the country and in the capital city of Addis Ababa left many dead and more injured. The government’s position is that these are violent Muslim extremists; against this view, the Muslim community organizers argue that they represent the moderate form of Islam that has existed in Ethiopia for over a thousand years and that their movement, which started in 2011, is non-violent. On Thursday, August 8th, in support of the Muslim protesters, Amnesty International filed this complaint against the Ethiopian government for human rights violations. Muslims make up about one third of the population of Ethiopia, but the state government has been dominated by Orthodox Christians since the incorporation of Muslim territory at the end of the nineteenth century. The entire history is a long one, and considering that the protest movement started about two years ago, I don’t want to dwell on all the details in this blog post; you can read or hear more about the past week’s conflict by following these links to OPride, BBC Africa, Reuters, and a United Nations brief. One frustrating thing is that the places where you won’t hear anything about these events are the major sources of information in the United States: The New York Times and National Public Radio.
Coincidentally, exactly when this conflict started in the Oromia region of Ethiopia, I was listening to Oromo intellectuals at the Oromo Studies Association conference at Howard University in Washington DC who were engaging in a debate about the complex historical relationship between religious organizations (namely Islam and protestant Christianity), cultural self-determination, and democratic movements. One of my students and I were at that conference to give presentations on a panel about international education, media, and film, along with OPride’s editor and the Oromo-language journalist for Voice of America.
So, drawing on what I learned at that conference and what I had already learned before going to it, we can deepen the context for this single photo to suggest a thousand-year history of political involvement from Turkey, Portugal, England, France, Italy, the United States, and most recently Saudi Arabia, China, and India. The cultural divisions in Ethiopia are not merely religious but also ethnic, and this is complicated because the largest ethnic group in Ethiopia, the Oromo, are a mix of Christian, Muslim, and older religious practices. Earlier this year, on June 25, Al Jazeera became the first global television news network to focus on these issues in a segment that you can watch here. But there are other factors to consider too, not mentioned in that segment of Al Jazeera. From the 1960s to the early 1990s, both Christian and Islamic religious institutions participated with other organizations in broad-based revolutionary democratic movements that eventually led to the revolutions of 1974 and 1991, but since the 1990s, new forms of Christianity and Islam have emerged that claim to be fundamentalist but whose funding and ideology seem to come from outside the country. We might consider too that for almost a century Ethiopian law has prohibited religious practices (such as burial and marriage) that do not fall under the jurisdiction of sanctioned Christian or Muslim institutions (e.g., the Oromo’s traditional Waaqeffannaa), and these new forms of fundamentalism (not only Christian and Muslim fundamentalisms, but also western neoliberal fundamentalism) appear to be suppressing some of the older forms of ethnic culture that predate the adoption of the world religions, including forms that give women important kinds of agency in their communities (e.g., addoyyee and siiqqee.)
So, now that I’ve summarized that context, let’s return to the photo. The non-violent gesture of the man engaging in “salat” (prayer) seems to have stopped the police officers. The image might remind us of other champions of non-violent action such as Mahatma Gandhi and Martin Luther King, Jr., who argued for the effectiveness of moral persuasion through non-violent action that exposes the hypocrisy of a ruling regime whose excessive use of force undermines the legitimacy of the state. The action of this man engaging in salat is not passive but firmly active non-violent practice. Noticeably, however, other forms of non-violent protest (e.g., marches and assemblies) did not have the same effect on the police. Two things seem special about this photo: first, that it is an act of prayer, and second, that it is a solitary individual putting his body at risk. This does two things. First, it challenges a bias in western media that tends to read Islamic practice and liberal human rights in opposition to each other, a bias that the Ethiopian government’s rhetoric to the outside world seems to deliberately capitalize on in order to discredit its political opponents. For Muslim Oromos living in the United States, Australia, and elsewhere, the photo would seem to suggest that liberal human rights and Islamic practice can function together. Second, it foregrounds the decision of an individual to put himself at risk for the greater good, rather than a group identity or mobilized mob. It creates a hero.
Thinking theoretically, and reflecting on this interesting question about the structural relationship between the practices of Islam and the idea of human rights, might all of this illustrate the anthropologist Arjun Appadurai’s inquiry into the nature of globalization? In his book Modernity at Large, he argues that various ethnoscapes, technoscapes, mediascapes, financescapes, and ideoscapes all play a role in social formations and local cultures — sometimes functioning together, but sometimes functioning in contradiction to each other. These global “scapes” are in tense dialectic with the local (i.e., the actual lived experience and social organization of communities.) My presentation at the Oromo Studies Association conference alluded to Appadurai’s theory to argue that today’s international education is very much enmeshed in these different “scapes.” In the case of the photo that is the subject of this blog post, we see the ethnic identity of Oromos, the practice of Islam, the ideology of human rights, and the technologies of social media. The photo might seem to fuse these various “scapes” into a singular image that celebrates a global sense of local freedom.
However, what we do not see in this symbolic image, of course, is the economics, and this includes the distribution of wealth and Ethiopia’s GDP that Jawar Mohammed emphasizes in the interview with Al Jazeera, but also the daily labor of individuals that Dr. Ezekiel Gebissa talks about in his book on coffee and khat production, as well as the speculative labor of financial institutions (what Appadurai calls financescapes), and even more basically the home-making of families. What do we make of this absence? Might it be important for how we read the effectiveness of symbolic images that come to represent such ideologically loaded concepts of freedom and democracy for American consumers of media?
We might compare this image to another one: the famous photograph from the Tiananmen Square demonstrations of 1989, when a single individual stood his ground and stopped a column of military tanks.
In fact, Oromos on social media (e.g., here) have explicitly compared the recent event in Addis Ababa in 2013 to that event in Beijing in 1989, and it is precisely the making of such comparisons between different movements that is the point of my blog post today, because in the media these images can become filtered through a western ideology of human rights that may not be fully attentive to some of the local cultural practices and understandings of what was happening. For instance, the American and European media all understood the Tiananmen Square demonstration to be a pro-democratic and anti-communist demonstration. What the media failed to appreciate is that communism and democracy are not inherently antithetical, and that one could protest the government for other reasons. In an important book entitled China’s New Order, written by one of the leaders of the Tiananmen demonstration, Wang Hui, and published by Harvard University Press in 2006, Wang reveals just how badly the western media misunderstood this event when they filtered it through the global ideoscape of human rights and democracy. Wang Hui outlines the variety of economic and social issues that concerned the Chinese people and the demonstrators, and how these issues did not neatly fit under a single ideological perspective. Importantly, many of the demonstrators were not actually protesting communism; rather, they were protesting the capitalist reforms, the opening of relations to American and European capital markets, and the “financescapes” dictated by the government, which were causing forms of economic displacement (e.g., working conditions) and general uncertainty. In other words, the movement was in some ways a conservative one, exactly the opposite of what the western media assumed.
So, what lessons do we learn from Arjun Appadurai and Wang Hui’s inquiries into the nature of democratic practice in a globalized world order? What further questions might we raise about this photograph of a man kneeling in prayer before police in riot gear? How might we untangle the tangled relationship between the Islamic practice of salat, the local demands of various religious and ethnic institutions, and the international ideology of human rights and non-violent political practice that the photograph seems to symbolically fuse?
One of Appadurai’s points about using the terms “ethnoscape” and “ideoscape” instead of the more ordinary terms “ethnic group” and “ideology” is that the neologistic “scape” alerts us to the ways that the meaning of ideas changes depending on context. For instance, African American civil rights activists in the 1960s, the U.S. government in the 1980s, and leaders of the democracy movement in Tunisia today might all use the same ideas of freedom, democracy, and human rights but mean slightly different things by them. Gandhi’s practice of non-violence is connected to a Hindu tradition whereas Martin Luther King, Jr.’s is connected to a Christian one. Scholars of the civil rights movement in America have long expressed frustration about the way Martin Luther King, Jr.’s political message has been watered down in the popular media and high school history textbooks and grafted onto the ideology of American patriotism. Likewise, the Ethiopian government’s branding of opposition groups as “terrorists” appropriates the inflammatory rhetoric of U.S. president George W. Bush a decade ago, but does so for its own ends, and when Oromos speak of genocide and ethnic cleansing, they are using legal terms formulated by the United Nations in the context of the Jewish Holocaust in ways that may or may not be slightly different from the way a UN legal team might use them. Hence, we are dealing not with ideologies but with ideoscapes, whose signifying power is supposed to be part of a universal language that everyone in the world can understand but is actually quite local and context-specific. Similarly, just as ideas are not pure and stable concepts, ethnicity is not a pure identity based merely on territory or authentic culture, because the lived experience of ethnicity and cultural practices has a dialectical relation to the global transformations and movements of peoples due to financial speculation, colonialism, etc.
For instance, a little over a century ago, the Oromo were a rather diffuse ethnicity of many tribes, kingdoms, religious practices, and dialects, who were forced to unify into a singular political liberation movement only after their rights and their land were threatened by a newly formed Ethiopian imperial state and global capitalism. Notably, an ethnic group’s right to self-determination is usually argued with terminology borrowed from the European Enlightenment’s discourse on “rights” but applied to local cultures who may have a different language for talking about such things. During the conference, one Oromo feminist community organizer said she preferred to think of women’s empowerment in terms of “social balance” and traditional Oromo culture rather than in terms of “rights” and western ideas. Hence, the lived experience of “ethnicity” changes depending on context and also depending on the ethnoscape’s relation to other “scapes.”
And so, in the case of this photo, we might need to think harder about what human rights and non-violent protest really mean in the context of Islamic practices within Ethiopia, practices that are themselves undergoing a transformation due to various global forces: the competing ideoscapes of religious fundamentalism and liberalism, and the ways in which finance capital transforms territory, the use of land, and a community’s access to natural resources such as water.
It happens almost like clockwork at the end of the spring semester and the beginning of the fall: the New York Times publishes another blog post lamenting something about college English education, usually by someone who only peripherally knows what they are talking about. One of my all-time favorites was Stanley Fish complaining about what was wrong with freshman composition classes based on offhand comments he overheard in the hall, and most recently Verlyn Klinkenborg sheds tears at the so-called “Decline and Fall of the English Major.” Since these are opinion pieces, they aren’t required to cite any actual data, but since they parade the veneer of insider expertise, their bitter commentary becomes somewhat dangerous as their misinformation is repeated and magnified so much as to almost seem like actual fact. Fortunately, professor-by-day, superhero-by-night Michael Bérubé is on the scene to correct this misinformation by citing actual numerical data in a recent op-ed in the Chronicle of Higher Education, “The Humanities, Declining? Not According to the Numbers.” Unlike Klinkenborg, who only teaches a few classes here and there as a “writer in residence” at various colleges (what might be analogous to having a guest-worker visa), former president of the MLA Michael Bérubé has access to real data. (Actually, all of us have access to this data; it’s just that journalists aren’t always motivated enough to go look at it.)
I pretty much agree with everything Bérubé says here, and I love his article, but there are some important things missing — things important enough that I believe attending to them will change the conversation entirely, as you may have guessed from the title of my post. The upshot of Bérubé’s piece is that the number of English majors has in fact neither increased nor decreased significantly since the 1970s. He notices that the lamentations usually carry with them an attack on theoretically rigorous scholarship and express some nostalgia for the olden days when we all knew which lines of poetry to quote at cocktail parties. He suggests that the real issue is neither the usefulness of the English major for the job market nor the quality of instruction (both of which are doing just fine); rather, the real issue is funding for education, the casualization of the labor force by hiring more adjunct instructors, etc. And of course, the constant specious attacks on our profession by the media and politicians aren’t so nice either.
So, what do I have to add to this conversation? Four things.
Thing one is the rise of new interdisciplinary programs. There was a useful study done by the MLA, published way back in 2003, about the declining number of English majors. Anyone can see it, though apparently reading something more than a page long is too much work for NY Times bloggers. The strength of this study is that it surveys all colleges and universities (not just a few, as journalists tend to do), and it shows the trends for each and every year (not just the years that support some sort of dramatic conclusion that journalists prefer to cite.) I’m guessing Bérubé got his information from this study, but what he doesn’t mention are some of its findings. There are many, but the one I want to draw attention to is the observation of what students started majoring in instead of English. The assumption by many is that they shifted to business, but although the business major has seen some increase, even more significant was the emergence of entirely new interdisciplinary programs: environmental studies, peace studies, gender studies, ethnic studies, film and media, and most importantly, communications. These programs are never mentioned in the journalistic lamentations, and they are important, because people who once upon a time might have become humanities majors are now opting for these programs, which often combine social sciences, humanities, and sometimes even a little technology or hard science. Historically, it’s also not surprising that the creation of these programs happened at about the same time as the number of English majors declined. Moreover, they affect different schools differently, depending on the size of the school. In my view, these are all great programs, and what’s more, they are programs that the English department is often involved in and supports, or even, in some cases, leads.
And this is one reason why I title this blog post “The Rise and Change of the English Major”: often it is the literature professors who have taken leadership roles in creating these new and innovative programs that then later affect the constitution of the English major itself. One of the enduring challenges for English department faculty is how to maintain the traditional major, with all the timeless classics and literary history, while also incorporating these new programs. It is not uncommon to find professors who have dual appointments in English and something else. In my view, English faculty need to be engaged and take leadership roles in interdisciplinary programs, but they also need to be clear about the expertise they bring.
There’s a lot more to say about this, but for the sake of keeping this blog post short, I will move on to thing two, and thing two is the fundamental importance of communication skills, analytical skills, and critical thinking skills for employers today. These skills were identified in a report by the National Association of Colleges and Employers about what employers want to see in college graduates. Moreover, corporations have been very clear that the kinds of things taught in business departments do not foster much critical thinking and writing, which is why a broad-based liberal arts program is important (see [here] and [here], for instance.) The point being, the English major has become more essential to colleges and universities than ever, in part because of the interdisciplinary nature of the English department that I mentioned in the preceding paragraph. For instance, some schools even require their majors in other subjects to take business writing, tech writing, or something along those lines taught in the English department. Although the traditionalist may lament the fact that students aren’t walking around quoting Shakespeare and Keats on a regular basis (did they ever?), corporations may be happy to have a job candidate who has had the experience of working through the complexity of a poem, because this sort of exercise carries with it a lot of transferable skills such as careful reading and original thought. And the inherent use value of English classes is another reason why I mention the rise and change of the English major.
The recent and often-cited Report by the Commission on the Humanities and Social Sciences clearly states the value of a liberal arts curriculum to employers for precisely the reasons I just stated. English majors have the skills that employers want. The report also mentions other important things, such as cultural and civic awareness, which are doubly important if one considers the rapid increase in jobs that have to do with civic engagement, social responsibility, and intercultural issues. What is curious is that when NY Times pundits such as Klinkenborg cite this report, they actually say the opposite of what the report says. Klinkenborg says that the humanities face declining enrollment because students think they can’t get a job with an English major. Actually, the report asserts the value of the humanities for employers, but notes the problem of funding and support for programs. In other words, it’s a political issue. Duh.
Related to thing two is thing three, and thing three is the growth of writing centers. Precisely because of the skills identified by employers, writing centers have gained a prominent role in the college. Often they have close ties to career centers, since both help students prepare their job application materials, and often they have close ties to English departments. The relationship to the English department differs depending on the school, ranging from being directly run by the department to simply having a lot of English majors on staff as tutors. The history of writing centers is long and complex, and recently I’ve done a little reading about them, including such books as Neal Lerner’s The Idea of the Writing Laboratory and Michael Pemberton and Joyce Kinkead’s The Center Will Hold: Critical Perspectives on Writing Center Scholarship, so I am reluctant to give a simplified history. But one of the upshots is that the role of writing centers grew considerably after the 1970s, in part because a greater number of Americans were now attending college, and more importantly, a greater number of these college students spoke English as a second language or were first-generation college students. This means two things. First, far from the “decline” that the journalists moan about, English departments are actually more important, because they play a role in supporting the whole school. And second, the English department has changed a bit, because scholarship on the teaching of writing has developed considerably since the creation of various journals on writing and writing centers, and much of this growth has tended away from the poetic and toward cultural analysis.
Thing four is the new emphasis in schools on “global citizenship” and leadership, cultural sensitivity, etc., and noticeably, these are all skills that employers value, too. They are also skills at which English departments excel. After all, English is where postcolonial studies was invented, way back in the 1970s, to better address the new global situation of newly independent African, Asian, and Caribbean countries. Before college administrations became aware of the “global,” literature departments were already there.
So, in conclusion, my argument is that we have not seen a decline in English, but rather an expansion and a change. Good changes in my opinion, though growing pains are always par for the course. The challenge is how to make our case to administrations and the general public who don’t always seem to understand the importance of English departments or even understand what it is that we do.
Why is this? Why is English the discipline that the media loves to cry about, and why are English departments so uniquely misunderstood? Now I move from the practical and the factual into the realm of theory. What’s also interesting to me is the way English professors are represented in Hollywood cinema, either as Shakespeare-quoting, bow-tie-wearing, obsessive, anti-social freaks or as lazy, lecherous alcoholics who sleep with their students. I’m not saying such colorful characters do not exist at all in real life, but they are the exception, not the rule. One possible explanation for all this misunderstanding and media hype is that, unlike with other disciplines, everyone thinks they know something about our discipline. I remember going to the doctor because I was sick and finding myself listening, through a haze of fever and congestion, to the doctor tell me about his favorite books and his view of literature. I was waiting for him to talk to me about my health, but he never did. I can’t imagine the opposite case of me lecturing the doctor about epidemiology if he came to my office to ask my advice about books. The fact is, most people don’t continue to study algebra, chemistry, and sociology after college, but they do continue to read books and even have strong feelings about them. Hence, history and English professors are often in the awkward position of talking to someone who thinks they know as much as we do about what we do, a position rarely experienced by the chemical engineer. This difference creates a psychological tension. Possibly the aspects of pure fantasy and irrational fear that we sometimes notice in the rhetoric about English departments in the mainstream media or in the speeches of politicians are an effect of this uncanny difference.
Note too the clearly contradictory nature of the lamentations about English. The same individual might complain first that English departments need to return to the classics by dead white males and stop teaching all this new-fangled theory and politically correct stuff, and then proceed to complain that English departments need to become more relevant to the “real world” (i.e., jobs and whatnot). That these two desires contradict each other often goes unnoticed. That the English department has for a long time actually been doing both of those things — both the classics and the real-world stuff — and continues to do both, also goes unnoticed. Sigh.
Still another disconnect is the strange notion that because English professors study metaphor and rhetoric, they must somehow be silly lovers of fanciful idioms rather than practical realists. To my way of thinking, it seems obvious that someone good at analyzing the use of metaphors and symbols would be expert at cutting through bullshit and seeing the facts for what they are. For instance, Bérubé’s article is a perfect example of such skills (as is, I hope, my own blog), in which he cites actual statistics and wonders why journalists keep repeating factually unsupported narratives. An English major would likewise quickly see through the rhetoric of those NY Times celebrity bloggers who seem to follow a rhetorical formula — the author relaying some cute anecdote that is supposed to make them sound like they know what they are talking about and then coming to all sorts of unsupported conclusions. Columns by the NY Times superstar pundit Thomas Friedman are typical in this regard. Reading Friedman talk about the economy is like reading someone who recounts a nice time rowing a boat on a pond and then launches into opinions about the chemical composition of the various plants he saw there, as if the one experience gave him the expertise for the analysis. There is a formal consistency to these op-ed pieces that is rather amusing and isn’t too hard to analyze.
However, I wonder if the growing importance and expansion of English for employers and colleges might be, paradoxically, the reason why English departments receive this sort of critical attention in the media. English departments are monstrous and scary — freakishly adaptable — and the skills they teach lend themselves to almost every other discipline, since all disciplines require some sort of critical thinking and culturally situated communication. We are monstrous, and that is our strength.
Three weeks ago, on April 24th, a textile factory in Bangladesh collapsed, killing 1,127 workers, mostly young women. The horrible event spurred international outrage and the arrest of the factory’s owner and building engineers. Calls for action, however, were followed by speculations on the effects any such action might have on Bangladesh’s struggling economy, not to mention the global business of retail clothing. People around the world wondered what would come of this tragedy. Finally, action seems to have been taken. In today’s New York Times, it was reported that major multinational corporations have signed on to a new “safety plan” and that the government of Bangladesh would revise its labor laws to make it easier for workers to unionize. Today’s news may sound like good news, and daily readers of the newspaper who have been distressed over this for the past two weeks may breathe a sigh of relief and assume the problem has been solved.
But it hasn’t. The New York Times has left out quite a bit of information and common sense. Whether this is because a culture of lazy ignorance and stupidity saturates the profession of journalism or because the Times tends to favor the interests of Wall Street, I don’t know. Maybe both. Maybe neither. In any case, what the Times leaves out is precisely the information that we need if we are to assess whether the new safety plan and national reforms will work. There is no excuse for neglecting this information, since global activists such as the Institute for Global Labor and Human Rights and the United Students Against Sweatshops have been making a case for global legal reform not just for Bangladesh, but worldwide, since the mid-1990s. In response to activists, multinational corporations such as Nike, Disney, and Wal-Mart have fought tooth and nail against labor and safety standards, though you wouldn’t know that from reading the Times. Here is the first part of a documentary entitled The Hidden Face of Globalization about the garment industry in Bangladesh produced by those activists:
The early years of this conflict were reported by Naomi Klein in her influential book No Logo, published way back in 2000, which quickly became standard reading for global labor activism around the world. At the time, there seemed to be a ray of hope for college students who were promoting something called “designated suppliers,” which would enable colleges and other organizations to select only factories that abide by labor, environmental, safety, and human rights standards to produce the clothing and other items with their college logo on them. In addition to many universities, many Catholic churches also took a leading role in this movement for social justice.
However, designated supplier programs are difficult to implement because of something called “outsourcing,” whereby the retailer outsources the management of production to another company, which in turn outsources the actual production to yet another local company. Major retailers claim such designated supplier programs violate anti-trust law on the grounds that they control supply chains, and hence the big corporate lawyers began to sue the nonprofit educational and religious organizations that attempted them — which of course is ironic, considering that Wal-Mart’s entire model of success is based on its very aggressive control of supply chains. The hypocrisy of the corporate world’s position apparently went unnoticed by the American judiciary. At the same time, when local governments in countries such as Bangladesh, Indonesia, and Guatemala, among others, try to support their labor force through the regulation of businesses, tax revenue for social programs and education, and labor rights standards, they are threatened with a loss of capitalization and financing by the International Monetary Fund and Wall Street brokers.
Much of the debate over globalization has been about how regulation happens — including the question of what laws are made and how they are enforced. So, what is sinister about the new “safety plan” proposed by multinational corporations is that they locate the problem in Bangladesh and pretend that they will now take responsibility for it. Related to this political issue, scholars of literature and media such as myself have long noticed how expressions of sympathy for the poor in so-called “third world” countries actually exacerbate the problem, because these shallow tears assume a paternalistic pity for others rather than an honest assessment of our own complicity. In other words, the cultural representation of sympathy supports the political plan of condescending paternalism that was naively reported by the Times today. We can imagine the Hollywood version of this movie — all tears and pity for the poor foreign country, but no substance and no awareness of the complex economic reality. After all, if we instead take a more global and politically responsible view, then the corporations would have to admit that it was their own practices and belligerent political pressure that created the problem, not just in Bangladesh, but worldwide.
And more to the point, who really believes that corporations will honestly regulate themselves?
Instead, what might be preferred is that the Bangladeshi government be in charge of such regulation, rather than the corporations, even if we worry over government corruption. This preference for the role of national government is part of the tradition of the United States and its Constitution. Such a preference should remind us of the success the labor movement in the United States had after a very similar event on the Lower East Side of New York City a century ago. The infamous Triangle Shirtwaist Fire in 1911 led to the tragic deaths of 146 workers, mostly young immigrant women. It galvanized labor organizers and caused a shift in public thinking that eventually led to many of the labor standards Americans enjoy today: the right to unionize, the minimum wage, overtime pay, and many other laws protecting the rights and safety of the worker. It is the responsibility of state and national government to ensure that the law is followed, that buildings are built to code, and that workers are not exploited. It might seem that Bangladesh is poised to follow this model, and indeed, the event has led to its government revising labor law so as to be less hostile to workers.
However, two things are significantly different. First, in this case, it is not the national government that is enforcing the new so-called safety plan, but rather a loose agreement among multinational corporations to do better in Bangladesh. Is that a viable model? And why not do better everywhere? Second, the question of why not do better everywhere, rather than in just the one country, leads to the troubling economic reality of globalization and the question of global governance. As the sociologist Saskia Sassen in her book Globalization and Its Discontents (1998) and the Nobel-prize-winning economist Joseph Stiglitz in his book also titled Globalization and Its Discontents (2002) have both observed, whenever one country starts to improve its labor and environmental standards, the system of outsourcing and supply chains simply moves its factories somewhere else. Hence, there is a legitimate fear within Bangladesh that implementing the rule of law in their country could result in multinational corporations moving their business elsewhere and financial institutions punishing the Bangladeshi government by pulling out their investment. The old nation-based model of progress that was so successful in the United States in the 1930s after the Triangle Shirtwaist Fire is a model that struggles today in the face of globalization.
There is a funny essay by the British writer G. K. Chesterton entitled “Cheese,” in his book Alarms and Discursions, in which he humorously imagines writing a five-volume scholarly treatise entitled “The Neglect of Cheese in European Literature,” because “poets have been mysteriously silent on the subject of cheese” even though “cheese is the very soul of song.” One would probably never say that poop is the very soul of song — perhaps it is the very opposite, the material remainder of our fleeting mundane existence — but for several years I have been speculating about what it might be like to write a literary history of poop. To my knowledge, it has never been done, and far more than cheese (which has actually been written about extensively), poets and philosophers tend to avoid talking about their most basic daily function. I have not yet followed through on this project, but today, thanks to George Takei on Facebook, I saw this hilarious comic of the Zen Kitties meditating on their kitty litter box, and I was inspired to begin.
The image is reminiscent of the famous Zen rock gardens of Japan, the most famous of which is at the Ryoan-ji Temple in Kyoto. The joke observes the resemblance between these philosophical gardens and kitty litter boxes, and then speculates philosophically about the poop as a metaphor for the impermanence of our own existence, a well-known idea in Zen Buddhism. However, it also seems to enact the basic drama of poop — that we wish it (and all the uncomfortable detritus of our lives) would simply disappear, but actually it doesn’t. The false consciousness of this ideology is discussed by the world’s favorite Slovenian philosopher, Slavoj Zizek, in several of his books. In the movie Examined Life, which features nine influential contemporary philosophers speaking about the world while peripatetically walking around somewhere in that world, Zizek begins his presentation, significantly, at a dump. By doing so, he is suggesting that philosophy, if it is to be honest and ethical, should begin with our excrement and our trash.
Precisely the things we least want to talk about in polite society are what we must talk about if we are to address the most important problems of our time and if we are to understand ourselves. It is telling that we have constructed such elaborate architecture and political infrastructure for quickly removing our poop as far away from ourselves as possible, so that we are able to go about our daily lives ignoring it as best we can. The Zen Kitties speculating on the total erasure of their poop actually mirror, in an odd way, the way we humans behave toward our poop.
In no way do I want to make the argument that these Zen Kitties have anything to do with actual Zen philosophy and practice, which is very rigorous and tough. But it does have something to do with the popularized, somewhat self-indulgent version of Zen in America that can be found in books such as Zen and the Art of Motorcycle Maintenance by Robert Pirsig. (And obviously, the title of my blog post is a play on Pirsig’s book.)
The popularized, mystified version of Western Buddhism and new-age spirituality is also something Zizek has critiqued in various places in his writings, including his article on the new Star Wars movies and his essay “The Prospect of Radical Politics Today.” He jokes that the Western Zen ethos is the perfect articulation of the neoliberal ideology of “late capitalism,” and that if Max Weber were alive today, he would have written a sequel to The Protestant Ethic and the Spirit of Capitalism (written in 1904–1905) that might better address our twenty-first-century world, titled “The Zen Ethic and the Spirit of Global Capitalism.” Zizek attacks this ideology, which he sees as unethical false consciousness: “Western Buddhism is such a fetish: it enables you fully to participate in the frantic pace of the capitalist game, while sustaining the perception that you are not really in it, that you are well aware how worthless this spectacle really is–what really matters to you is the peace of the inner self to which you know you can always withdraw.” This Zen Ethic (by which Zizek means the popularized Zen in Western culture, not actual Zen) pretends to be beyond politics precisely at the moments when its practitioner is most enmeshed in a political world. Ironically, the typical mode of withdrawal today is not Zen’s spiritual withdrawal into an ethical selflessness, but hipster irony and an endless play of cultural referentiality.
What I love about the Zen Kitties is their meditation on one of the most profoundly difficult subjects of existence. The philosophical conclusion they draw from the cleaned kitty litter box is the impermanence of life. The more obvious question that they don’t ask, and that Zizek thinks we need to ask, is where the poop went. However, even though the comic doesn’t ask Zizek’s question, the huge eyes of one of the kitties register a surprise and an anxiety about the disappeared poop that is the comic counterpoint to the closed, meditative eyes of the other kitty. Both of these responses are two sides of the same condition — not our human condition, but a condition that is both animal and technological at the same time. The Zen Kitties’ imagination of the philosophical meaning of a pristine and stainless litter box, in a bizarre way, it seems to me, mirrors our own twenty-first-century global culture’s desire for a smooth and seamless world of production and consumption without consequences and without pollution, and it provokes laughter at the strangeness of our own impossible desire. The counterpoint to this desire can be found in one of my favorite children’s books, Everyone Poops by the Japanese author Taro Gomi, which beautifully explores both the naturalness and the humorous variety of pooping. It can also be found in one of my favorite essays on Japanese culture, Junichiro Tanizaki’s In Praise of Shadows, which impishly delights in contrasting the poetic, meditative, shadowy qualities of Japanese wooden toilets to the obsessively clean, white, antiseptic European toilets.
Anyways, so begins my critical inquiry into the literary history of poop.
A few years ago, I wrote a post in this blog about how the James Bond movie Quantum of Solace was symptomatic of globalization, and I later expanded that post into a scholarly article entitled “The New James Bond and Globalization Theory, Inside and Out,” for the journal CineAction that was published in the fall of 2009. The text has been put on the internet without my permission by the Free Library [here]. In it, I discussed many of the theorists of twenty-first century globalization who have argued that the old international order of nation states has been superseded by a new global order in which nation states are merely part of a larger network of transnational and local relations that include multinational corporations, finance capital, criminal organizations, non-governmental organizations, social and environmental movements, etc. Whether or not that is actually true, it is a way of thinking about the world that, I argue, is reflected in recent cinema. In my view, Bond was not unique, but rather typical of this paradigm shift within the movie industry in general and spy thrillers in particular, and I later blogged about the movies The International and Duplicity to expand my argument. So, when the 23rd Bond film, Skyfall, was released this year, on the fiftieth anniversary of the first Bond movie, I had to see it. And considering that this is the most profitable Bond film of all time, scoring huge at the box office, I was very curious whether the new movie would confirm my theory about Bond films, and several of my friends and colleagues asked me whether I thought so.
In some ways yes, in some ways, no.
For sure, the actor Daniel Craig continues to play the constantly brooding, angry version of Bond, instead of the pithy, urbane version of Bond performed by Sean Connery, Roger Moore, Pierce Brosnan, and others. But my point is that the new Bond style is not just Craig’s acting — it’s the whole thing, and it’s a “whole thing” that relates to the history of globalization.
The question that the characters of Skyfall ask over and over again is whether the fictional Bond character, as well as the real British intelligence service MI6, is irrelevant in our globalized, postmodern world. The movie brilliantly layers this idea, as Bond appears to die, but returns, and at various moments in the movie, Britain’s Parliament debates the relevance of MI6 and the double-O agents. In one scene, Bond and Q sit in front of a painting of a “grand old war ship inevitably being hauled off to scrap,” and Eve Moneypenny jokes about Bond being an old dog with new tricks. As the gorgeous theme song by Adele begins, “This is the end,” and later Bond jokes that he specializes in resurrection. (By the way, Bond’s resurrection is not a new theme; consider You Only Live Twice, Never Say Never Again, and GoldenEye.) Amusingly, and not so coincidentally, critics have been asking the same question that the movie itself asks. Is the Bond film a dead genre, or does it have to reinvent itself or resurrect itself to stay current and hip… and… uh… not suck. And there appears to be a general consensus that Skyfall represents something new, some critics celebrating the movie for its innovative new take, and others trashing the film for failing in the attempt. However, I have a slightly different view than the critics. For all the obsessive worry about relevance and newness, the film actually asserts a troubling and ridiculously nostalgic return to the old Bond.
But before I explain what I mean about this nostalgic return to the old Bond, rather than a further elaboration of the new Bond, let’s review how Skyfall repeats some of the stuff I mentioned in my article about Quantum of Solace. Most of the “globalist” ideas appear in a speech that the villain Silva gives when he and Bond first meet. Silva pontificates about all of Bond’s outdated attachments to the nation-state and the old order: “England… empire… MI6… you’re living in a ruin and just don’t know it yet.” (Ironically, they are having this conversation literally within a ruin that Silva himself created.) He goes on to explain how easy it is to destabilize nation states by rigging the stock market and elections. In a sense, Silva’s speech is somewhat similar to the argument I made about globalization and the withering of the nation-state in my article, but with one key and unsurprising difference. What was good about the previous Bond movie, Quantum of Solace, is its recognition that in the real globalized world of today, it is the U.S. and British governments who are doing all that “rigging” and often collaborating with clandestine and criminal organizations in order to do so. This was the first time in Bond history that the British government was not unequivocally on the side of good. The plot was complicated enough to map out a somewhat complex network of relations, which moved beyond the simplistic good-guys-versus-bad-guys story that was so typical of the older Bond movies. What’s stupid about Skyfall is that the world’s geopolitical complexity is reduced to the character of Silva, whose insanity represents pure evil, and who would be a totally absurd character if it weren’t for the brilliant acting of Javier Bardem. What is even more troubling is Bond’s response to Silva, that Bond represents a “resurrection.” But a resurrection of what? Silva has just trashed the British empire, and who would want to resurrect that?
In a sense, the new Bond film reduces the complexity of history to an Oedipal drama. (I’m not the only person to notice the excessively Freudian structure of the plot; for instance, see David Denby’s review in The New Yorker and another in the Atlantic.) Whereas Quantum of Solace traces the return of history in terms of American geopolitical strategies coming back to bite America in the ass, Skyfall is strictly a Freudian fantasy where the injured MI6 agent with mommy issues and a bruised adolescent ego returns to attack his former boss, who is represented as a mother figure. The film is brilliant on this point, especially when Silva shows what the cyanide capsule did to his face when he tried to kill himself in order to protect Great Britain; in that scene, he is both figuratively and literally the monster that MI6 unintentionally created. We might pose an analogy between this monstrosity and the monstrosity of so many militant groups created by the United States and Europe in other countries that backfired — Ronald Reagan’s al Qaeda being the worst. But the movie doesn’t do that. Instead we have two ghosts (or, “the last two rats,” as the movie repeatedly jokes) – the ghost of Bond returning from the dead in order to fight the ghost of Bond’s evil twin. Both of them feel wronged by MI6, and for Silva, M clearly represents the “phallic mother” figure whose love he seeks but whom he also wants to master or destroy. However, unlike Silva who returns from the dead to wage a personal war against M and MI6, Bond returns from “enjoying death” to protect M and MI6 because, he says, “we are under attack.” In this way, the movie projects international politics onto the personalities of individuals, and any geopolitical context that could have been explored or even just alluded to in the background has almost entirely disappeared from view. 
The movie even attempts to justify its own narrative blindness by means of an odd version of globalization theory’s thesis about the reduced role of the modern nation-state when M tells Parliament that “our enemies are no longer known to us, they are no longer nation states; they are now individuals…. and the shadows is where we do battle.” (Ironically, of course, their enemies are very much “known” to MI6, because apparently the “individuals” are former MI6 agents.)
Three quarters of the way through a very long movie, it appears that Silva’s postmodern, globalized insanity has got Bond and MI6 beat, so how is Bond to fight back? The answer is by going back in time, where, as Bond says, “we have the advantage.” And so we travel to Bond’s childhood home, Skyfall, a mansion in Scotland. To complete this nostalgic image, the old home appears to come with its own endearing old caretaker, Kincade, who appears with a shotgun on his arm as if just back from a pheasant hunt. Here, several things are completely unique and new about this Bond film. First, this is the only time in Bond history that Bond’s childhood is a major part of the plot. In all other Bond movies, Bond’s life before he became an agent is totally absent, and it’s hard to imagine him as anything but a fully formed agent, as if he sprang, like Athena from the head of Zeus, complete with tuxedo, martini, and Walther PPK pistol (or, in this case, from the motherly brain of M). Second, this is the first time that most of the explosions happen inside Britain. Usually, Bond goes to other countries, where he and the villain callously destroy much of that nation’s cultural heritage, but in Skyfall, both MI6 headquarters and Bond’s childhood home are destroyed (and please note the Freudian connection between his childhood home, where his parents died, and his adult home at MI6, where the life of his new “mum” — his boss M — is threatened). Lastly, and most importantly, this is the first Bond movie where Bond cries, and over what does he weep so many tears? Yes, the death of his surrogate Oedipal mommy, M.
Since the death of M (mum) is the climax of the movie, we might think back to when Judi Dench was first introduced as the new M — not surprisingly, in the last movie also to question Bond’s relevance in a post-Soviet era, GoldenEye, in which she calls Bond a “sexist, misogynist dinosaur.” In the history of Bond films, GoldenEye represented a major turning point for three reasons. First, because it was produced after the longest gap in time between Bond films, as studios really did believe the genre had died with Timothy Dalton. Second, it was the first Bond movie to be produced after the fall of the Berlin Wall and the Soviet Union, so it very directly raised the question of whether MI6 and Bond were still relevant. Third, GoldenEye replaced the sexist, old-boys-club feel of the earlier Bond movies with more progressive roles for women, including Judi Dench as M and a more outspoken and capable “Bond girl” (e.g., Natalya Simonova, played by Izabella Scorupco). By the time we get to the new Casino Royale and Quantum of Solace, the pathetic, Bond-worshipping Moneypenny character has also been dropped from the story. Curiously, while in her first movie Dench as M criticizes the old agents like Bond, in her last movie she defends them, and she defends them just in time to signal a return to the arrangement of the older Cold-War-era Bond movies, with a new male M and a doting Moneypenny. What the excessively Freudian Skyfall stages is the death of the “phallic mother” (M).
I’d like to pause for a moment to emphasize a curious contradiction. On the one hand, critics are saying this is a “new” Bond movie (which of course annoys me, because I argued that it was Quantum of Solace that was the “new Bond”). But on the other hand, it is a movie that nostalgically gestures back to the older films and performs a wish-fulfillment fantasy of a return to an older world order.
But of course we can’t go back, and what really makes this movie “new” and interesting is the troubling Freudian discovery that it can’t go back. Bond blows up his childhood home, which he says he has always hated, and its image burns like the ghost of history, an uncanny and very un-Bond-like image that haunts the movie’s end. This is wonderful cinema. For a full minute of screen time, everything is dark except for this burning house. Even more important than the destruction of Bond’s two homes (his childhood home and MI6 headquarters), I’d like to suggest that one other aspect of this movie also undermines the desire to return to a simpler time. As some critics have noticed, the “Bond girl” Severine was the victim of sexual abuse and human trafficking when she was just a child. Bond’s discovery of this, and Severine’s self-betrayal, is perhaps the most interesting moment in the film — the only moment of a troubling Real of globalization in a movie that is otherwise little more than a Freudian fantasy. Actress Berenice Marlohe is brilliant here, her whole body trembling with fear, rage, and hate towards the world order that the movie represents. And for both Lacanian and Foucauldian theorists of the Real and of the body, it is important that it is the actress’s body that communicates this. I assume that the horror of this scene is meant to dramatize what a horrible villain Silva is, but the horror is so great it almost overwhelms the whole movie. As dozens of scholarly articles on James Bond have noticed, Bond’s relationship to women is, of course, symptomatic of the fallen British empire’s relationship to the world. We may recall that what was unprecedented in Quantum of Solace was the chaste relationship between Bond and the Bond-girl, Camille Montes, with whom he does not even try to have sex, but instead gives a brotherly peck on the cheek.
In Skyfall, by contrast, what is unprecedented is that the history of Severine’s exploitation is admitted, and the tragedy of her situation more painfully understood. In a way, both the excessively chaste Bond and politically radical Bond-girl in Quantum of Solace and the realization of Severine’s history in Skyfall are two sides of the same coin — the horrible Real of globalization that can no longer be properly sexualized and neutralized by a debonair hero. In truth, it is Severine who is the tragic heroine of globalization in this movie. Bond is not.
Let me explain why not. Traditionally, most Bond films end with Bond and the Bond-girl together in each other’s arms, but at the end of the new Bond, Severine has died, Moneypenny has been transformed from a badass agent to a cheerful secretary, and the woman in Bond’s arms is his mommy, M. If I may riff on Newsweek‘s infamous cover story in 2009 after the government bailed out the auto industry, “We Are All Socialists Now,” we might speculate that if the popularity of the latest Bond movie says anything about our culture today, as it anxiously looks ahead to a troubled brave new world, it says that “We’re all Children Now.” At the beginning of this essay, I promised that I’d say something about why Craig’s brooding style is more appropriate for the new Bond than the adolescent humor of the old Bond — Craig is a lovable, angry child.
A couple of months ago, when I was moving to New York from Minnesota, and doing a lot of cross-country driving, I noticed that two of the most often played pop hits on the radio were Nicki Minaj’s “Starships” and Fun’s “We Are Young.” And I’m not embarrassed to admit that I quite like both songs. At some point during the many hours on the road, I began to ask myself what it was about these songs that was so appealing. What made them so popular? And I began to entertain the notion that they seem to express the way young people today have reacted to the long economic recession. However, after I got to New York and started building my new life, I sort of forgot about the many random speculations I had on my long trip, until a month later, when a bunch of journalists, e.g., see [here], started asking questions about Nicki Minaj’s politics after a recent performance in which she rapped “I’m a Republican voting for Mitt Romney, you lazy bitches is fucking up the economy.” Personally, I didn’t think that lyric was an indication of a political position one way or the other. Few people would assume from her performance of “Roman Holiday” that Minaj believes herself to be possessed by the devil or is a member of the secret Illuminati order, so why take one line from another song and attempt to construct a partisan position out of it? Nevertheless, the relationship of pop music to political ideologies and economic issues is a question that interests me. The kind of reading of the songs that I am doing here is what cultural and literary theorists, from Louis Althusser to Stuart Hall, call a “symptomatic reading,” and I want to contrast “symptomatic reading” with something the journalists seem to me to be doing and what I will call, for lack of a better phrase, “ideological reading.”
The songs by Fun and Minaj are both about partying and having fun; as is typical of pop songs, the chorus and the verses seem to contain opposite messages. For instance, listening casually to the song “We Are Young,” the chorus that goes “Tonight, we are young, so let’s set the world on fire, we can burn brighter than the sun” would seem to be a celebration of youthful desire. The driving, anthemic music contributes to this feeling.
However, reading the lyrics of the verses of Fun’s “We Are Young” tells the opposite story. The song is told from the point of view of a young man who has, apparently, physically abused his girlfriend in the past, probably while intoxicated, and now they are both again so drunk at the bar that they need someone to take them home. Not only does the story the lyrics tell haunt the chorus; the anthemic music haunts itself, its notes dropping at key moments to create a depressing, dark counterpoint to the hopeful message of youthful desire.
Nicki Minaj’s song is similar in the way its form contains contradictory ideas. The music is club music, with a strong beat for aggressive dancing, and the chorus seems to promote the party at which, we might imagine, the song would be played: “I’m on the floor, I love to dance, so give me more… Starships were meant to fly, Hands up and touch the sky, Can’t stop ’cause we’re so high, Let’s do this one more time.” Brilliantly, these lines seem to tell the music, and therefore also the bodies of the listeners, what to do: put your hands up and dance (and also drink) one more time.
However, as in the song by Fun, a closer reading of Minaj’s lyrics reveals their dark, cynical irony. The character that the song is about is someone who will “blow all my money and don’t give two shits” and “ain’t paying my month’s rent.” Not only does the song make fun of itself, but it is also a perfect synthesis of form and content in which the lyrics and the music seem to be having a conversation. The music, lyrics, and video all express longing for escape, as they fuse drinking, sex, vacations at primitive beaches, and starships. The idea of the starship as a utopian escape from a frustrating reality has a long history, from Parliament’s “Mothership Connection” to Kanye West’s “Spaceship.” My favorite moment is when Nicki Minaj sarcastically quotes the famous children’s song “Twinkle Twinkle Little Star” after the character says you can “fuck who you want” – a juxtaposition of vulgarity and innocence that indicates just how much the song’s character is lost in space, pursuing her childish dreams of fun.
So, what are the politics of their songs? An ideological reading would have a hard time locating any political view, since both songs express the desires, frustrations, and contradictory feelings that people have. Nicki Minaj presents us with a Barbie-doll image but seems to mock it at the same time. How do we begin to analyze the politics of having fun, poking fun, and dropping puns?
In contrast to an ideological reading, a symptomatic reading will put the song in its socio-economic context, observe how the symbolic content of the song expresses psychologically repressed problems, and observe what aspects of those problems are absent from the song. In other words, both Fun and Minaj’s songs seem to be wonderful expressions of the frustrations and desires of young people in the midst of an economic recession. Fun’s song focuses on an abusive drunk, but neglects to explain what provokes a man to be abusive and to assert his identity in such a way. Moreover, why do we feel we can relate to this troubled character? Minaj’s song focuses on a party girl who is – as so many Americans discovered in 2008 when the economy crashed – in chronic debt. One effect of this recession is that the “youth unemployment rate” (ages 16 to 24) is very high, and it seems to me that the sort of schizophrenic nature of Fun and Minaj’s songs is an indirect response to these troubled times. In my view, Minaj’s lively wordplay is somewhat more attuned to the broader economic problems than Fun’s more anthemic style, even though it lacks the emotional content that Fun’s song has. Both songs, I believe, are wonderfully symptomatic of the contradictory feelings we have about the current economic recession much in the way that a runny nose and sore throat are symptomatic of the virus that causes them. However, in saying that, I don’t want to suggest that the songs are merely symptoms and therefore naive and stupid, because I actually think the lyrics are quite sophisticated and self-aware enough to draw attention to the problematic of the contradictory feelings we have in our twenty-first century consumer-driven society, which demands of us that we all believe we are special despite our lacking the means to be truly special.
However, their “diagnosis” of these symptoms (if I may continue the medical metaphor of my mode of reading these songs) merely notes the contradictions at play in the way we live our lives, but not the deeper viral problems that are the root of them. In the end, there is no escape from our pathetic lives except for the fantasy of the escape narrated in the song that is already structurally a part of our lives.
So, what are Nicki Minaj and Fun’s politics? Heck if I know, but that question is entirely different from the question of the political problematic of their songs.
The death of Ethiopia’s Prime Minister Meles Zenawi, announced earlier today (August 21, 2012), has precipitated a storm of questioning and speculation about who the next Prime Minister will be and whether there will be a significant shift in the relations of state power. Even before his death was officially made public, his disappearance from view for the past two months prompted many to wonder what was happening behind closed doors. For now, the Deputy Prime Minister Hailemariam Desalegn has officially assumed responsibilities, as Ethiopia’s Constitution specifies (article 75). According to an ABC report [here], the Council of People’s Representatives will decide sometime this September whether Hailemariam will retain the position for the rest of the term until the 2015 elections. Hailemariam’s position is somewhat weak because he was appointed to this leadership position only two years ago after the 2010 elections and doesn’t seem to have much influence with many important constituencies. Some question whether he can control the military. Obviously, this is a very important moment in Ethiopia’s history considering that Meles has been its Prime Minister since the very first election after the constitution was ratified in 1995 and has actually held de facto power since the Revolution in 1991.
There are numerous lines of inquiry that one can take, but the questions that I would like to focus on are these: (1) What constitutional ambiguities, if any, does Meles’s death expose? (2) What does the American response to Meles’s death tell us not only about U.S.-Ethiopian relations but also about American culture? Indeed, many Oromos in the United States have been wondering why American newspapers and the American government have been so silent on Meles’s disappearance from the political scene for over a month, seemingly waiting for some official announcement (like the one today, [here]). Before I continue discussing this issue, I have to admit that I am no political scientist, and I usually find contemporary Ethiopian politics to be a confusing maze of acronyms. I am writing this blog largely because my past involvement in Ethiopian and Oromo issues has led several of my friends to ask me what I think about this. In answer to that question, probably the best thing I could do is simply refer them to this excellent analysis published by Jawar Mohammad the day before Meles’s death was officially announced. So, read Jawar’s piece for a political analysis. As for myself, what I have to offer as a scholar of literature and language concerns the narratives that people make in order to make sense of what’s going on and the blind spots that those narratives create. The only blind spot in Jawar’s piece is the role of foreign governments in the politics of his homeland, but that is an issue that, lacking concrete evidence, Jawar was perhaps wise to avoid, since one can only theorize about it — and theorize I will.
Let’s look at the constitutional question first. The official narrative of Ethiopia that the constitution tells is the narrative of a democratic federal state that shares power among various constituencies in various ways. In some ways, the Constitution’s language seems excessive, giving far too much detail about the procedures and duties of each office, as if it has to illustrate its democratic qualities by spelling out each thing governments might do, and yet, at the same time raising the question of why some things are not included on the list. In the midst of this excess, there is at the same time a lot missing. Importantly, there is not much in Ethiopia’s constitution about an official line of succession, except to note that the Deputy Prime Minister represents the Prime Minister in his absence (article 75). Significantly, the Constitution neglects to say how the Deputy Prime Minister is appointed in the first place. This apparent gap in the process may explain why so many of my Oromo friends on Facebook have been speculating for the past month about secret negotiations and politicking behind closed doors. The problem of a line of succession is certainly not unique to Ethiopia. The language in the U.S. Constitution was originally vague about the position of the Vice President and also unclear about who would fill the position if both the president and vice president died or were removed from office. However, in the case of Ethiopia’s constitution, we find slippery language throughout. On the one hand, the Constitution emphatically asserts a transparent (article 12) and accountable (article 72) government by elected representatives (called “councils”) of the people. And to be sure, the Prime Minister and his various officials are beholden to the Council of People’s Representatives (article 72 and 77).
What is slippery is how much power the Prime Minister is granted by the Constitution, including the power to “supervise” and organize the activity of the councils and the many important positions that are appointed rather than elected (article 74). For instance, it has long been noted that the office of the President is purely ornamental, being merely appointed by the Council of People’s Representatives and having no formal power whatsoever (article 71). Notice that nobody is even considering the President as a candidate for any future office; why would they? The office is little more than an empty symbolic gesture. But considering the politically weak position of Deputy Prime Minister Hailemariam, and considering the ambiguity and lack of transparency in how such political appointments are ratified, it seems to me that the constitutional provisions for the line of succession need to be revised and amended.
The second question is America’s response and the narrative Americans tell themselves. It would seem that America is just finding out about Meles’s situation today in the New York Times even though people who are invested in Ethiopia’s politics have been wondering about Meles’s health for a long time. Even the British newspaper, The Guardian, wondered about Meles’s apparent disappearance two weeks ago. It would seem that American newspapers held off reporting about Meles until after they received official word from the Ethiopian government. What do we think about American newspapers’ apparent lack of journalistic tenacity? I want to suggest three possible viewpoints. We could simply chalk this up to the general lack of concern Americans have for other countries. We might call this viewpoint the “innocent ignorance” viewpoint. The opposite viewpoint is that the American government was so heavily invested in what was taking place behind the scenes that it actively suppressed all discussion in the mainstream press. We might call this the “paranoid conspiracy” viewpoint. Neither of these viewpoints seems reasonable to me. The first avoids the obvious fact that lots of people were talking about it and the other obvious fact that American newspapers often speculate wildly about the regimes of other countries before receiving official word from the governments of those countries. The second assumes all sorts of unprovable things and forgets the more mundane workings of state institutions (e.g., the constitutional procedures for temporarily transferring power and the state bureaucracy that actually does most of the work, whether or not anyone is actually “leading” it).
I think the best way to go about thinking of this issue is to compare the Ethiopian situation to similar situations in other countries. Without belaboring the point, we can easily recall the constant speculation (much of it irresponsible) about the health and stability of the leaders of other nations, such as Cuba, North Korea, Iran, Syria, Libya, etc. What do all these nations have in common? Obviously, there is conflict between them and the United States. So, what’s the simple moral of the story here? This is not so clear. One might speculate that the press follows the political interest of the U.S. government, so if the U.S. government is interested in undermining another government, then the press will jump right in and say as many nasty things as it can in order to justify American hostility, meddling, invasion, etc., but if the U.S. government is interested in supporting another government, then the press will hold its tongue and be polite. After all, President Obama today praised Meles and has never acknowledged the many human rights abuses perpetrated by his government, e.g., see [here]. But it’s hard for me to buy into the theory that the press is simply the yapping dog serving American political interests, even if we might clarify that we mean that it serves American corporate interests (i.e., the corporations who pay for the advertising), not that it serves the elected government (i.e., Congress). What seems somewhat more likely to me is that the press makes a careful calculus about what sort of journalism is both profitable and safe. Hence, when we look at the narrative the press tells about Ethiopia, we see that it is significantly different from what President Obama officially says. 
The press is emphatic about two things: first that Meles was a successful leader who reformed Ethiopia in positive ways and helped transform Ethiopia into a democratic, prosperous nation, and second that he was a ruthless, oppressive autocrat under whose rule democracy floundered and human rights were constantly violated. How Meles could be both those people at the same time is hard to figure out, and so the press has to be very careful about where these two images of Meles come from. It has to appear “fair and balanced” after all, yet all the while revealing very little.
And what I mean by revealing very little is that the press so often seems to avoid actually investigating some of the roots of the issue — not only the constitutional question that I raised, but also the very troubling relationship between American foreign policy and Ethiopian domestic policy that has been going on since the Clinton administration and only seems to get worse. I often find myself wondering if journalists ever go to the library and look stuff up before they start reporting on it. And in this case, what has long troubled me is something the press never talks about, and that is the degree to which the United States supports Ethiopia with money and weapons in exchange for political favors, such as the attack on Somalia in 2006.
However, I want to be clear here. I don’t think either the press or the American government has a clear agenda with regard to Ethiopia. So, when I say that the U.S. is supporting Ethiopia, I’m not saying that this line of support is consistent or unilateral. It is, in fact, symptomatic of many of the classic ideological contradictions that Karl Marx long ago observed in capitalist, colonialist countries that propagate a set of conflicting values. Americans want democracy in Ethiopia, but they also want a secure state friendly to American business interests. Americans want pluralism and tolerance worldwide, but they also ally themselves with some groups against others, in particular against those others who desire a government according to either Islamic or Socialist principles. Americans want economic development in Africa, but not competition from Africa. The point being, what will never be fully addressed in the American media is the full relationship between Ethiopia’s line of succession and America’s very confused sense of itself and its own interests. As James Ferguson’s book Global Shadows: Africa in the Neoliberal World Order demonstrates, it is hard for both Americans and Ethiopians to think beyond the category of the nation-state when we are assigning responsibility for political and economic problems and speculating about possible solutions.
I sometimes read the columns of John Feffer, one of the founders and co-directors of a think tank called Foreign Policy in Focus, whose informed analysis of world events I generally appreciate and respect. For the sake of full disclosure, I might also add that, about fourteen years ago, before this think tank existed and before I attended graduate school, both John and I lived next door to each other in Tokyo, Japan, when we both worked for Quaker organizations – I was teaching English at the Tokyo Friends School and he was working for the American Friends Service Committee. Since I usually agree with the stuff John writes, I was more than a little surprised when I read one of his recent blog posts entitled “Scram!” and found myself shouting angrily at the computer screen. In his blog, John pretends to give a college commencement speech, and following the conventions of that genre, he reflects upon the purpose of education and gives practical sage advice, and of course, in doing so, he makes several political statements about the state of education today. I have no problem with the main idea of the article, but in the process of articulating it, he makes some rather cynical and disparaging cracks at college culture. Of course, John is not the only one to use this seasonal moment as an opportunity to say something about the institution of higher education. The end of the school year has inspired both President Barack Obama and Mitt Romney to attack each other on various education policy issues in various speeches, including an actual commencement address Obama gave to Barnard College and Romney’s release of his education plan, which has been discussed and debated in the higher education newspapers [here] and [here]. Almost all conversations about college education today point out the rise of student debt at a time of economic insecurity and rising unemployment. 
This year, college graduation has become something of a political football, tossed and punted around by media pundits hoping to score political points.
In sorting through the various perspectives and statements on the state of higher education, and in coming to terms with my surprise and anger at some of John’s comments, I am reminded of the theoretical notion about ideology made by the influential cultural theorist Stuart Hall. In one essay [here], he argues that ideology is not such a simple correlation between a social class position and a way of thinking — e.g., the ideology of the proletariat vs. the bourgeois, Democrat vs. Republican, or college student vs. banker. Instead, he urges a theorization of difference in which we recognize “that there are different social contradictions with different social origins; that the contradictions that drive the historical process forward do not always appear in the same place, and will not always have the same historical effects.” In other words, even though John and I may basically agree on most things, we may come to our beliefs in different ways and occupy different social positions in relation to our ideology as well as in relation to the various contradictions in our ideology.
Let me try to explain what I mean about Hall’s complication of the concept of ideology by first acknowledging how John and I agree. His main idea and advice for college graduates is for them to go to a foreign country, get some experience, learn a foreign language, and come to a different understanding of the world. In light of the changing socio-economic conditions that many journalists and scholars (including myself) call “globalization” and in light of the recession that began in 2008, I have often given this same advice to my own students and have used almost the same language as John. “Get out of the country,” we both say. “Make some money to pay off your college loans and get some experience to help you figure out what you can do.” This doesn’t just mean teach English, as I did. It could mean some sort of service, as John did. This goes along with some advice I wrote to graduating students in this blog way back in 2009 [here], specifically for my English majors who were confused about their career options and graduate school. Of course, our viewpoint here is not so original. Colleges and universities themselves are increasingly promoting “global citizenship” and study-abroad programs, as such experiences and skills are increasingly seen as necessary in today’s competitive environment. So, there is really nothing especially controversial about this view, whether you are a Republican, Democrat, Libertarian, Marxist, or Green. There is, however, something troubling about this view if you are poor — since the version of “global citizenship” and the way such cultural experiences are credentialed is somewhat exclusive.
And here is where I get critical of how John arrives at his position, and my point here, relating to the theory of Stuart Hall, is that we may all agree about the value of international experience and skills, but we may arrive at this viewpoint differently and articulate the contradictions of our ideology differently. John makes some rather cynical statements about the value of college education, which he compares to a social club for rich people. Curiously, this comparison is part of his argument that contrasts the uselessness of college education with the usefulness of international experience. Hence, John’s argument relies upon a somewhat common (and, in my view, false and misleading) binary opposition between an exclusive ivory tower of spoiled brats and the “real world.” Ironically, of course (and here is an instance of the ideological contradiction I mentioned earlier), international experience is perhaps even more an exclusive opportunity for the rich than a college degree is. Many of my own students who come from poor families have remarked that a lot of the international and service opportunities (e.g., Peace Corps, AmeriCorps, various non-governmental organizations, etc.) with which their wealthier classmates pad their résumés are unavailable to them. They need to make money right away, and often do so with the less glamorous sorts of jobs.
This is not just a minor point, since the entire thrust of John’s argument relies on the contrast between a somewhat useless and exclusive nature of college education (useless in terms of any real skills or knowledge, according to him) and the usefulness, openness, and inclusiveness of “real” international experience. He claims, “college is more about socialization than about education.” Here his argument relies upon a binary opposition between socialization and knowledge, as if the two aren’t intimately related (as I demonstrated in my last blog post about intercultural competency). In fact, the way one goes about gathering data and interpreting it is an ethical act that requires the student to be able to question their own assumptions. Later, making the assertion that the infamously exclusive Skull and Bones club is a microcosm of the whole college experience, John claims that “college is a lot like the club of advanced industrialized nations” where the elite cater to the elite and exclude others, and he makes this claim without any acknowledgement that international travel may be an even more elite activity than college.
Repeatedly at issue in John’s blog is what college faculty and administrators do. Those of us who teach, interact with students each day, work hard to design curriculum, and work with administrators to address the problem of social inequalities among our students tend to think that we are making a genuine effort to accomplish exactly the opposite of what John claims the institution of higher education does. Most colleges aim for their campuses to be a “microcosm” of the whole society, not a microcosm of an exclusive club, and colleges support this as best they can with need-based financial assistance and programs to recruit first-generation students, immigrants, etc. However, what is disturbing here is that John’s stereotype of college culture is not supported by any evidence but is instead cloaked in the authority of a rather simplistic Marxism (exactly the sort that Stuart Hall is critiquing). His stereotypes sound plausible, as stereotypes so often do. However, they are not true. Students actually learn quite a lot. Speaking as someone who actually labors in the trenches of higher education, I can say that the examples are numerous. In their first semester, students learn how to do research, evaluate sources, and compose a long research paper. This is very difficult for most of them when they arrive, but by the end of their first year, they can do it, and this is a very valuable skill, as I’ve argued elsewhere [here]. It is one of the joys of teaching when we see students accomplish something that they weren’t able to do before.
In addition to such skills (and there are many such skills), there is also a lot of knowledge and ways of processing knowledge. One of the things education abroad offices and college professors have discussed at length is how students learn (or sometimes don’t learn) from their experiences in other countries. Contrary to John’s assumption, most of the real learning happens not when the students are in the foreign country, but after they return to college, when they read, write, discuss, and process their experience. And this is especially important because what a middle-class white person (such as John and myself) usually experiences in a foreign country is a rather small piece of it. These travelers may not be aware of the socio-economic forces that produced their experience, and they may not ever see other aspects of the society, as I have argued in my blog about my trip to Japan with students [here]. Even someone whose job is to understand the whole society in which they work might be exposed to only a part of that society and come away from their “experience” with a rather ideologically warped understanding of where they were, as I have argued in my blog about my trip to Kenya with several of my colleagues [here]. There are things that a cultural historian, sociologist, and economist may reveal about a country that no personal experience will ever get, and although John seems to think that the various courses that students take during their four years of college have no connection to each other, actually the point of the liberal arts education is that they learn different ways of understanding the realities of the world that are usually invisible to us. College professors labor very hard revising and revising and revising college curriculum so that it is more inclusive and more effective at leading to exactly the sort of global understanding that John’s blog promotes. 
It is one of the joys of teaching when we see a student’s face light up with a transformative new understanding of the world they live in and when they make connections between what they learn in one class and another.
So, coming back to Stuart Hall’s point about ideology, we can see that while John and I might both agree that an international understanding of the world is important (a view with which almost all college administrations today also agree — so much so that it would seem both John and I reflect the very hegemonic ideology that we think we are critiquing), we arrive at that view very differently. Moreover, I would argue that the way he makes his politically leftist argument ironically has a lot in common with the arguments that the political far right makes about higher education. Many of those arguments assert that public universities are sites of liberal brainwashing and socialization rather than of real content. (I have written about this at length [here].) Their aim is clearly to discredit the scholarly work that professors do, and it is ultimately a financial one — to shift funding away from publicly funded, publicly managed education to privately funded, privately managed education. My own problem with this is that privately funded education is, of course, like privately funded think tanks, subject to the whims and biases of private money (often the interests of corporations and bankers). What worries me about John’s disparaging comments about higher education is how similar they are in effect to the beliefs of the very people John is most ideologically opposed to.
And of course, when it comes right down to it, the real issue is money. All of the hysterical assertions in the mainstream media about the quality of education, how little students are learning, the content of their curriculum, the usefulness of a college degree, and the effectiveness of the delivery mechanism (i.e., online education, the lecture hall, group learning, etc.) are largely smoke and mirrors. The real issue is an economic one, and it has to do with inflation, the cost of education, competition, etc. The often-cited graph is this one, which shows the cost of education rising much faster than the rate of inflation.
Although Romney and other Republicans argue that the cause of this rising tuition is the federal government giving loans to students (huh?) and that fostering competition between colleges will drive prices down, it is more likely that the heightened competition is exactly what has driven prices up, as colleges pour money into new facilities, special programs, etc. I can tell you that the increase in tuition dollars does not go into professors’ salaries or the classroom experience. Rather, it goes into all the special things that make a school attractive and competitive. We might laughingly speculate that one of the things that has driven up college tuition is the effort to become more “global” in exactly the way John prescribes. The real concern here is how we finance a broad-based, accessible liberal arts education in which students from all backgrounds are exposed to many different ways of looking at the world and learn valuable skills. In my view, it is dangerous to fancifully imagine replacing the liberal arts college and public university with something else, as John seems to do, because what is most likely to replace it is a corporate-driven agenda that merely trains young people to do the things the stockholders and CEOs need them to do. This is not the road to real prosperity, and it is not the road to a just and equitable society. Given the obvious gaps and holes in our culture that will likely widen with such a corporate-driven agenda, we can anticipate what will fill those gaps — not the objective, hard scholarly work that happens at the research university but the biased and narrow agenda of sectarian groups that fight each other.
For the past decade or more, colleges and universities around the country have been revising their curricula to include something that the administration likes to call “intercultural competency” (something I have written about twice before [here] and [here]). The idea is so prevalent that there is even a Wikipedia article about it, as well as many textbooks such as this one [here] for majors in Communication, Elementary Education, Business-Management, and so on. The basic idea is that in the pluralistic, multicultural world in which we live today, college graduates are more likely to work with people of other races and people from other countries than ever before, and therefore colleges ought to prepare their students. To put it another way, the administration is catching up to what literature professors have been doing already for the past thirty years and to what the Civil Rights movement campaigned for half a century ago. Different colleges have implemented this requirement differently, some requiring just one class on the appreciation of difference, and others requiring two distinct classes, one on the subject of racial diversity within the United States and the other on international relations and cross-cultural dialogue. Both versions have their strengths and weaknesses, but those will not be the subject of my blog post today. Significantly, one of the most popular television shows of the past decade among the undergraduate population, The Office, devoted its second episode to the subject of intercultural competency. In this satirical comedy, the more the boss tried to be interculturally competent by instituting “diversity day” at the office, the more he exposed how incompetent and culturally insensitive he actually was. The show is more than just a parody of the impotence of badly managed intercultural competency; it is also symptomatic of the psychological anxiety many Americans still have about the issue.
As Freud points out in his famous essay on jokes, it is that anxiety that makes the joke culturally resonant and makes us laugh, even though the joke typically displaces that anxiety onto something easier for the audience to deal with emotionally.
As the episode of The Office and Freud’s essay suggest, what might make a class on intercultural competency hard to teach – and even harder to institutionalize – is that it is not simply a set of information that the student must learn. Rather, it asks that the students come to terms with their selves – their biases, desires, privileges — some of which may be conscious, some of which may be unconscious. And as all my students in my introduction to theory are well aware, once the notion of the “self” is posited as an important dimension of the curriculum, then things get tricky. Often the course may enter uncomfortable territory not because the student is encountering a new, foreign culture (as many administrators wrongly believe), but actually because they are encountering uncomfortable things about themselves that they already know but don’t want to think about. For instance, ideally, we have all been taught that an equitable society for men and women of all colors and creeds is desirable, but at the same time we also recognize that this is not in fact the case and that there is a huge gap between the ideals of our society and its realities. Some of us may have privileges, opportunities, and good fortune that others don’t have. Everyone is aware of this gap, but few want to confront it. Notice that this discomfort has to do with a political and sociological difference, not a cultural one. Hence, the very conceptualization of “intercultural competency” is already a problematic displacement of a thorny political question onto a cultural schematic. In my view, the fact that many people naturally gravitate towards the familiar and avoid the unfamiliar isn’t enough to cause discomfort in intercultural competency classes; rather, what’s uncomfortable is the things about ourselves and our world that we are all too familiar with but would prefer not to take responsibility for.
Now, I don’t want to claim that intercultural competency is all about the psychology of the student, or that it’s all about the equally problematic psychology of the institution. Most of it is about the appreciation of different cultures along with the history of race relations and/or international relations, including histories of colonization and imperialism. Such is the manifest content of class work — the stuff one studies. However, the latent content of the class is its meaning for the self. And because I just taught my intro-to-theory class the essays of Freud, Lacan, and Derrida, I am deliberately using terminology from Freud’s Interpretation of Dreams that distinguishes between the “manifest content” (the stuff in the dream) and the “latent content” (what the stuff means). In other words, the stuff on the syllabus for any such class is naturally going to be primarily content about different cultures, ethnicities, races, etc., and there are hundreds of ways to approach this content that reflect political and methodological differences among faculty. However, regardless of the content, the impetus behind the syllabus, or the drive that motivates requiring it, is of course the student’s actual relationship to people of other cultures or races, as well as the student’s ethical understanding of self and other. In some ways, then, the latent content of the class is ethics and psychology, even though most of the manifest content may be neither ethics nor psychology. There would seem to be a slippage between the manifest content (the study of culture, history, literature, etc.) and the latent content (ethics, psychology, etc.). When this slippage between multiple subjects is considered, it is easy to see why this is a hard curricular requirement to wrap one’s head around. The course slides between the academic subjects of history, literature, culture, psychology, ethics, etc.
Moreover, always grounding this linked chain of subjects is another subject — the self or “I” (note the double meaning of “subject” here), what Freud calls the “ego.”
(A brief theoretical joke for those who have read Jacques Lacan’s “Agency of the Letter in the Unconscious.” Please skip this parenthesis if you haven’t. Following the example of Lacan’s joke about the mathematical algorithm for Freudian psychology, we might give the variable subjects in the class a capital letter “S” for “Subject” and the variable stuff in the intercultural class a lower case “s” for stuff. Lacan’s algorithm for the relationship between language and the unconscious is that the function of the Signifier (S) is the relationship between the ego (I) and the signified (s), and we might jokingly say that the function of the Subject (S) is the relationship between the student’s ego and the stuff in the class. Consequently, whatever the variable Subjects (S) of the class are, their relationship to its various stuff (s) and therefore its very meaning in society is a function of ego, hahahaha, and this is why the Subject of a class on intercultural competency inevitably slides along a chain of Subjects, from culture to history to literature to political science to ethics to psychology, etc. In other words, in Derrida’s terms, the central point of intercultural competency is by definition absent and decentered along a signifying chain.)
Hence, if there is always a psychological component, whether or not this component is actually on the syllabus, I propose a psychoanalysis via Jacques Lacan. In his seminal essay, “The Agency of the Letter in the Unconscious,” Lacan argues that many pop psychologists mistakenly understand the unconscious id of Freud’s theory as instinctual drive. However, the unconscious is not instincts at all, but is in fact the aspects of our relationship to our own culture that we suppress and displace through complex symbols. His argument is long and difficult, but for the sake of this blog, I want to boil it down to three versions of the Cartesian cogito that Lacan explores. Following Lacan’s exploration of these three formulations, I will explore three versions of intercultural competency, each based on one of these formulas for the self’s relationship to critical thinking.
- I think therefore I am.
- I think where I am.
- I think where I am not; therefore I am where I do not think.
The first is the famous line “I think therefore I am.” Here the philosopher René Descartes explores our ability to doubt received wisdom and to question our very existence, but ultimately resolves this doubt by positing a thinking subject that exists. There is something wonderfully appealing and universal about this idea, and somewhat radical for his own time, since it places the burden of rational and ethical thought on the individual subject rather than on God. The upshot for an intercultural competency class is that the students are given the tools to think critically about their own culture. Hence, the goal is to transcend the arbitrariness and randomness of culture instead of wrongly believing that one’s own culture is in some way normal or universal. What many philosophers have criticized, however, is the idea of a self-contained individual subject that is doing all this thinking. Most of us are aware that our thoughts respond to external stimuli, are derived from language that we have no control over, etc. Hence, in order to situate the subject in his or her environment or cultural context, Lacan humorously suggests a revision of the statement to “I think where I am.”
What are the implications of this second formulation for intercultural competency? Well, the first version implies that all human beings are rational thinking beings. There is something wonderfully universal about this, and much of the ideology of modern Europe was the belief in the universality of modern science and human rights. However, there are many challenges to this ideology. Most of the “rights” encoded in our Constitution and the United Nations Charter are individual rights. However, people don’t live alone; they live in communities, which has led the United Nations to add “cultural” and “community” rights to its declarations. We are not just individual thinking subjects; we are also members of specific cultural locations. Hence, Lacan’s second version of the cogito draws attention to the goals of intercultural competency typically set forth by college administrations: (1) to appreciate other cultures, and (2) to appreciate that one comes from a culture oneself. In other words, one may think that there is a “normal” way of doing things and a “normal” or “universal” way of understanding rights and responsibilities, or one may also think that it’s possible for a rational individual to transcend the arbitrariness of culture, but in fact our very way of thinking is conditioned by the circumstances in which we happen to live. To put it another way, when we think, we think with the various tools for thinking that our culture gives to us. And this is what Lacan means when he suggests that we think where we are. We think through our culture.
However, Lacan is unsatisfied with this for all sorts of reasons. First and foremost, it’s not true. We don’t think where we are. Culture is not so deterministic, and ethical values are not so relativistic. In fact, when we think of ourselves, we always do so in relation to other people and other spaces. For instance, men both desire and fear women. Our imagination of ourselves is always in relation to desires and fears, and an important contribution of psychoanalytic theory is that we don’t simply fear difference; we also desire it. If we think of the earliest examples of classic literature, they are always imagining the self in far-away locations: e.g., Thomas More’s Utopia, John Milton’s Paradise Lost, Daniel Defoe’s Robinson Crusoe. The more popular example of the James Bond movies takes this utopian imagination even further, as the secret agent always expresses himself by seducing exotic, foreign women. Even the Bible begins by defining humanity in terms of a lost paradise. So, the notion of intercultural competency that suggests we simply appreciate where we come from and where others come from is somewhat impotent and wrongheaded. More dangerously, it is also deterministic, as it assumes that a culture determines the way we think. Against such a deterministic understanding of culture, we can appreciate that even though two people may come from the same culture, they may also differ from each other in all sorts of ways. The human imagination is broader and more interesting than simply one’s cultural location. It is not simply a reflection of where one comes from. Those who posit a version of intercultural competency along the lines of “they have a culture and I have a culture and I must appreciate both” are not just factually wrong in really obvious ways; they are wrong in a way that is horribly unethical.
Hence, we have two paradigms: one is Descartes’s individual rational subject, who doubts everything but seems unaware of the role cultural difference plays in his or her own thinking, and the other is overly deterministic in its premise that we simply think the way our culture teaches us to. Neither of these is satisfactory, and neither gets us very far. How do we reconcile these two very different iterations of the cogito – of how we think and how we recognize ourselves thinking?
To answer this question, Lacan concludes with this version of the cogito: I think where I am not; therefore, I am where I do not think.
This formula, I believe, provides a more useful and factually accurate understanding of cultural difference. On a very simple level, it helps students discover that stereotypes are not just incorrect understandings of others. Rather, it helps students understand how stereotypes are expressions of the culture generating the stereotype — its desires, its fears, etc. In other words, we understand ourselves through metaphorical figurations of others. To give you a recent illustration of this, one author has angrily argued against something he calls the “White Savior Industrial Complex.” This is a perfect example of how Americans feel good about themselves by saving Africans, a feel-good situation that first requires that the Americans understand themselves in relation to a stereotype of Africans. For another example, I have written about Oromo ethnic culture in America and in Ethiopia with Lacan’s theory in mind [here]. Another example might be the popularity of “gangsta rap” in mostly white, middle-class suburbs. In addition, on a more fundamental, epistemological level, Lacan anticipates the work of recent philosophers of science and philosophers of mind who question the mind-body dualism of Descartes and assert that our mind thinks not only with our body but also with our body’s physical relationship to the world.
Therefore, if thinking always happens in relation to a world, then in order to understand ourselves, and in order to become ethical individuals, we need to understand the world, and here we come full circle back to the very impetus behind intercultural competency in the first place — the world we live in. In other words, the stuff (lower case “s”). However, it’s not enough simply to study the stuff, because how we imagine the stuff is crucial. In other words, the Subject (upper case “S”) directs our understanding of the stuff (s). If Lacan is right, that we understand ourselves through our rather metaphorical imagination of others, then the question of how to teach the psychological component of intercultural competency is key.
What I think is cool about Lacan’s formula for the relationship between thinking and selfhood is that it opens up the slipperiness of identity, the possibility of change, the role of the imagination, the necessity of self-criticism, and the recognition that we are in essence incomplete beings. Think about it. Why do we both fear and desire others? Because we are at root dissatisfied with ourselves. We are incomplete. Hence the marriage metaphors “better half” and “she completes me.” Intercultural competency is, in part, a quest for completeness and a meaningful life.
In conclusion, and to return to the episode of The Office about the bumbling attempt to overcome stereotypes, we can see the boss articulating his own identity through various personas — Chris Rock, Martin Luther King, Jr., etc. In other words, he believes he is leading his office to become interculturally competent because he is endorsing black celebrities, but his imagination of himself through them is precisely what is offensive, because his starting place is the assumption of difference and the fetishization of specific tropes (metonymies) of that difference. The show clearly indicates that this is foolish, but unfortunately, it does not give us any positive indication of what might be better. The show does not provide any space for individuals to actually have a real conversation about difference. In order for a class about difference to be meaningful, the starting point needs to be the extent to which we are interdependent, incomplete individuals. On a very basic level, I rely on others for food, clothing, shelter, knowledge, culture, etc. Where does it all come from, and how does it move? Why don’t I want to think about the conditions in which my T-shirts were made and the extent to which my identity depends on the teenage girl in Mexico who made them? That’s the starting point.