Daniel Hayes

“The Best Thing About Religion Is That It Makes For Heretics”

Where have we arrived?

That’s the question that comes to mind, after already having written six entries on the issue of secularization (and religion). Perhaps it’s better to ask, Where have I arrived? And I wish to write one more entry, after this one, dealing with things more personal. But for now I want to see where my thoughts on secularization take me…theoretically.

As mentioned before, Charles Taylor argues persuasively against what he calls the “subtraction theory” of secularization, whereby we rid ourselves of superstition and bad science and arrive at some neutral site cleansed of the sins of bad thinking. For Taylor, secularization is not only not neutral but a peculiar outgrowth of axial religion, and particularly Christianity. Taylor, a Christian himself, is also famous for thinking that this development allows for a religious viewpoint to continue to co-exist, if in altered form. I have less sympathy for that argument. But let’s just agree that secularization is a very messy thing, not aimed in a particular, predictable direction, and it constitutes a world viewpoint not much different in form than religion (which isn’t surprising, given its offspring status).

I think I have, until recently, adhered to a view that merely wished, via Taylor, to point out the hypocrisy of those who think of themselves as somehow getting beyond religious categories. Adherents of an atheist liberalism are here the main culprits, since they hint at a view of life that doesn’t necessarily include religion or other (intolerant) foundational categories. According to this idea, if remnants of religion persist, they serve as a temporary inconvenience. But liberalism—or, more generally, humanism—seems to me more a version of three-card monte. Or a warmed-over Christianity, sprinkled with the pursuit of happiness and sacred notions of human dignity. No wonder, then, that a good secular humanist like Philip Kitcher can take a walk with his Christian friend and find that the two of them disagree about so little: they belong to the same tradition. (And from a Martian’s point of view, the differences are insignificant—more a question of style than anything else.) According to this anti-humanist view, Kitcher’s sin is that he doesn’t go far enough down the Nietzschean path of shedding himself of new religions. He’s a backslider, without knowing it. He’s a foundational thinker without wanting to admit it, whereas I was under the impression that what we wanted, as good atheists, was to forgo conventional, foundational ideas about the good.

Well, my thinking on this has changed, at least a little. No, it’s not about how I’ve willingly become a secular humanist. (I’m kicking and screaming.) And it’s not about how religion is inevitable, or how it’s impossible to come up with moral categories without resorting to religion, or how we have a spiritual side deep down whether we like to admit it or not. No—it has more to do with a simple recognition of a complicated historical predicament: that is, where we’ve landed, so to speak. We are caught up in a dualistic situation—a clear legacy of our axial past—and it’s best to just admit it. In other words, we can’t think out of the box because our vocabulary is of the box. I don’t mean this in some ontological way, or even in some Wittgensteinian sense, but in a very, very historical way. I can imagine, and I’m downright certain of it, that there will be a time when this human predicament that I’m describing—this little dogleg in human thinking—will appear as an historical oddity, with thinkers of this a-long-time-from-now time smiling at our squirming, our wriggling, our discomfort.

Thinking psychologically, I would say that our predicament is akin to that of a patient in psychoanalysis (or at least the more modest post-Freudian versions). It’s not true that “the truth will set you free.” It’s not true that you can know your unconscious, let alone significantly change your course with what limited knowledge you can gain about your unconscious. You are who you are. Your possibilities are severely limited. This isn’t to say that there’s nothing to be gained by recognizing who you are; and it isn’t to say that people don’t change, or that you can’t be affected by the process of psychoanalysis and feel, at the end of the day, in a better place. But your solutions are written with the language of your problems, and there’s really no way of getting away from that. Dispensing with foundational thinking, at least at the present time, is like pulling yourself up by your bootstraps. Though that’s not right—it’s less a question of effort (or physics) and more a question of the limits of imagination.

As usual, much of this thinking comes from a recent article I read. (I am, if nothing else, suggestible; for me, the passing of eyes across a page of words is an act of transference.) I’m thinking particularly of Laurens ten Kate’s article, “To World or Not to World: An Axial Genealogy of Secular Life,” in which the author suggests, in so many words, that we are all humanists. (I cringe.) He relies mostly on a reading of Etienne Balibar, who argues that humanism is best understood as a common heresy. Not common in the sense of being prevalent, but common in the sense of being a heresy that has general characteristics that make it useful across the board, with various religions. According to this conception, humanism is not neutral ground, nor a universal umbrella, nor even one world view among others—but, instead, a mediating space that comes from inside a religious discourse. In speaking of believers, Balibar says that this heretical element, within a particular religious discourse, “must also ‘expropriate’ of their own singularity and disturb their certainty of being uniquely ‘true’ or ‘just,’ while not preventing them from seeking truth or justice along their own ‘path.'”

“The best thing about religion is that it makes for heretics,” Ernst Bloch wrote. In other words, religion develops. A humanist, in this view, is a Christian heretic intent on tinkering with what has come before. For ten Kate, this note of heresy is a good way of thinking about contemporary secularism because it both honors “the intertwining of secularity and religion” and suggests what amounts to “the lived practice of dual history”—that is, the peculiarity of our current situation. For Balibar, we are living within a crisis that began with axiality; but that “rupture with the gods does not simply turn humans into new gods, omnipotent, limitless and infinite as the gods were. No, in the world as saeculum, as other, humanity encounters its limits and vulnerability just as well.” Ten Kate speaks of this moment in time as “the twilight zone between human self-assertion and ‘transcendence,'” which gives way to “a humanism continuously negotiating between…humans and gods.”

This of course raises the question, at least in my mind, of what simple, unadulterated “human self-assertion” might amount to, if we were able to jettison “transcendence” entirely. But this question has increasingly become, for me, speculative. I think of answering it as less a theoretical stance than as an act of imagination. And I realize that it requires imagination because it’s not a matter of another act of subtraction (as in Taylor’s scheme)—getting rid of the transcendent, and leaving us finally in the company of ourselves. I may be misreading him, but it seems to me that the philosopher Jean-Luc Nancy is often suggesting that leaving us in the company of ourselves is a very tall order—at least right now. For Nancy, divinity is “a human relation to the outside”—that is, to the “other world” (whether we say it exists or not); and yet this is not a second world or even a world-behind-the-world as much as it is the “other of the world.” (Go ahead and try, Nancy seems to be saying, to act as though the “other world” doesn’t exist—here, there, somewhere.) I think Nancy is asking us to appreciate the depth of the duality that is implied in the seriously strange idea of Incarnation—an idea we can’t seem to easily shake.

Here I want to return to something I said a few weeks ago about ownership. I’d asked the question, Who owns the world? The Christian answer is quite simple. And the Nietzschean answer is relatively simple, though not quite convincing: we own the world (though, out of weakness, we’ve projected our ownership onto an imagined deity). But what if no one owns the world? What does that make the world? And for ten Kate, this is the heart of the problem: “This essential lack of ownership from both sides marks the crisis that the axial shift engenders ipso facto.” And here, ten Kate quotes Hans Blumenberg, who is quoting Hannah Arendt when he writes that “man has ‘removed himself from the earth to a much more distant point than any Christian otherworldliness had ever removed him.'”

In this scenario, humanity becomes a very lonely place—somehow held beyond the world, but not in the sense that God is beyond. As Blumenberg puts it, “When humanity lost its hope for a beyond, it has not, through the intensity of conscience freed up by this loss, entered the here and now; rather humanity, thrown out of the world beyond and out of the world here and now, was thrust back on itself.”

In this sense, we are homeless. It’s an existential thought, yes, but I’m thinking of it here more as a historical predicament that will, in time, lift and then become, as it were, senseless. (History is a graveyard of the senseless; there is, it seems, no other way to die.) For now, as Marcel Gauchet puts it, we live “a life in the world outside the world”; or, put differently, we live “in time that is outside time.” What a strange predicament! And, needless to say, this condition of ours suggests, besides the possibility of monomaniacal arrogance, a kind of whimpering vulnerability. It’s as though we’re cowering in the shadow of a nonexistent God, and yet now taking ownership of the world isn’t really one of our options. What renting might amount to is anyone’s guess.

Daniel Hayes

It Could Be Otherwise

Sometimes I feel very stupid. Or perhaps foolish is a better word. I felt this way at a certain point in reading Charles Taylor’s A Secular Age. Taylor was writing about meaning—or the meaning of meaning, and the genealogy of the concept. Taylor’s argument was that finding meaning in life, or considering the kinds of questions that seem so obviously relevant to someone like me (“What is the meaning of life?” “Does life have a meaning?” “Do we give meaning to life, or how do we find it?”), is a recent phenomenon. (Taylor also speaks of this emphasis on meaning in his Sources of the Self: “A set of questions make sense to us which turn around the meaning of life and which would not have been fully understandable in earlier epochs. Moderns can anxiously doubt whether life has meaning, or wonder what its meaning is. However philosophers may be inclined to attack these formulations as vague or confused, the fact remains that we all have an immediate sense of what kind of worry is being articulated in these words.”) I won’t go into the specifics of Taylor’s argument, but suffice it to say that he gives historical evidence and that I found his portrayal of meaning, or the emphasis on meaning, as recent and historically unprecedented, pretty convincing. And, really, I hadn’t ever before considered that the search for meaning might be anything other than timeless. What was I thinking?

This search for meaning, which Taylor speaks of as a “quest” or a “hunger,” is tied to notions of belief allowed to flourish by developments in axial religions, finally culminating (for Taylor) in the idea of religion as being “optional” (i.e., a choice). Whether you believe that Christ is your savior or that Christianity as a whole is utter hogwash—that’s on you, and it depends on how you personally go about constructing meaning, or finding meaning, in your life. The notion that you can only find meaning through a religious affirmation now seems wrong to most of us (even to those religious folks who think that the true meaning is only to be found in God or some higher spirit). And this is what we mean by religious toleration—individuals are allowed to find meaning where they choose to find it. Sometimes we think of this form of religious toleration as an openness, and yet Taylor is suggesting that there’s an inherent requirement, an overarching quest that we take for granted. Whether this quest is religious in nature is beside the point.

According to this conception, humans are wanting. The world as it exists in its immanence (whether or not a transcendent realm is acknowledged) is insufficient, incomplete, a work-in-progress, and most of us think of something akin to a universal good (discovered through nature or reason) that is beyond simple human flourishing. Human flourishing is either not enough, or it can only happen if it is connected in some way to a universal good, a purpose, or a meaning; it requires a scaffolding that allows it to exist, rather than simply existing or being there for the taking. How should one lead a life? How should we, as a collective body (a nation, say), pursue political arrangements for the greater good? How to lead a life is no longer given; it is required that we figure it out—in short, that we invent our own solutions to the problem of falling short of what we want. We are all seeking a better life, and that better life is here, now.

I’m suggesting that this way of seeing the world, or what we might designate as our worldly situation or predicament, is in no way natural. Andre Cloots, in an essay entitled “Christianity, Incarnation and Disenchantment,” defines what he calls “the Christian solution” as a “combination of these two affirmations—the ontological completeness of the world on the one hand, and the world not simply being the expression of the divine will but needing salvation, on the other.” (In this scenario, salvation still looms.) Cloots doesn’t quite say this, but he suggests that we are now living in a very, very strange and uncomfortable place: “the culmination of an instability brought in by the axial religions between world-negation and world-affirmation…. [that] consists not in purely accepting the world as it is, nor in just turning away from it, but rather in investing in the world in order to make it better.” The conclusion that Cloots comes to, which I think is most interesting, is that a religious justification for our investing in the world is no longer necessary. We can seek a better life, and find meaning in life, without God. And Taylor’s overall point in A Secular Age is that this notion, wherein religious affirmation is optional, is historically unprecedented.

What does this mean? On the one hand, it means that religion, or religious justification, is either beside the point or best set aside, in a sort of Nietzschean burst of courage or maturity; and yet it also means that “investing in the world” doesn’t require an overtly religious point of view, even if the idea of investment (in terms of personal meaning or political preferences) obviously has a religious origin. Contrary to Nietzsche, the difference between things religious and antireligious becomes insignificant—a sideshow. This isn’t to say that religion wins out, and that we can’t get away from it (the impossibility of getting beyond what Nietzsche calls “substitute religions”), but simply that we live in an historical era in which the effort to “get away” bespeaks a higher, more overreaching agenda, and that agenda seems to us like the air we breathe. We invest in the world in order to improve it, and this strikes us as a nonnegotiable, innocuous (or at least unimpeachable) way of going about life. What other way is there?

My point is exactly that: there is another way. There is always another way. (Easier to see in the past than the future, but obviously an ignorance of the past makes it especially difficult.) This is what we mean by speaking of contingency, but sometimes we don’t apply the concept as fully as we like to think. There’s a story here, and it is contingent—that is, it isn’t inevitable, even when it feels that way. We are in the midst of this story—we know that much. This story doesn’t have a purpose or an ending, but it does often involve a sort of (necessary) blindness to our assumptions. “Investing in the world in order to improve it” is our solution to an historical predicament that has everything to do with the rise and development of axial religion, and particularly its Christian variant. As Cloots puts it, “We hear about improving the world every day, but we rarely realize the exceptionality of such a way of thinking. Just like we rarely realize the exceptionality of the doctrine of Incarnation.”

Daniel Hayes

Conjectural Groping

It’s easy, when it comes to “secularization,” to get caught up in a neat narrative. Obviously this is one of the pitfalls of “theory.” If there is a historical progression, and this progression is not inevitable or aimed in some predetermined direction, it still seems reasonable to say that it possesses a logic of progression–that is, one thing leading to another. But, to use an analogy, orange doesn’t turn suddenly into brown. Things take time, and so-called developments are a matter of fits and starts. Zigzaggy is the typical pattern of things moving forward.

This is of course the way that modernization theory accounts for everything, too, despite evidence that calls into question the very notion of modernization. If some cultures and nations seem to reflect more primitive or backward arrangements, that’s only because they’re in the business of dragging their feet; in time, they, too, will come to see the good logic of modernity. But this is not quite what I’m arguing. This type of modernization theory assumes an endpoint (modernity, democracy, individual rights, free speech, a full-fledged market economy). For me, the future is much more interesting—more open-ended, unknown, and certainly not aimed toward any kind of endpoint. I’m just suggesting that it’s possible to look at history and see patterns that, in retrospect at least, seem to have a certain directional logic.

I bring this up in thinking about the role of belief in religion and modernity. The axial revolution created, as it were, an opportunity for a new emphasis on belief and the role of choice, but it took a long while for this to gain traction. Charles Taylor describes this in speaking of what religion once was and is no longer—that is, “something you receive, rather than something which is the result of a quest or a hunger, something in which you stand, rather than something of your deliberate choosing, something societal rather than something individual, something at the service of God rather than something at the service of man, full dispossession rather than full self-possession.” The oddity of Taylor’s position, which in this case is also Marcel Gauchet’s, is that this description of the effects of “secularization” amounts not so much to society-after-religion as to religion-after-religion; the transformation that Taylor is speaking about happens within religion, even though it seems to describe aspects of secularization. And—here’s the messiness, the zigzagging—you can look at modern Christianity, and particularly its more evangelical variants, and see aspects of the past: an emphasis on something given by God and then received by those who name themselves believers; and a positive attitude toward choosing, as long as the choice is made once and for all and followed then by an obedience that bristles at the notion of “self-possession.” Nevertheless, I think Taylor is on to something in his discussion of belief—a deliberate choosing—as a particularly modern conception of religion.

Some might claim that this choice is, in fact, the beginning of religion as we know it—that is, a distant, somewhat inscrutable God who might be ignored or not, depending on the inclinations of individual human beings. Or, to add to the complexity, there are even some Christians who think that religion in this sense (an option among many options) is the real culprit. Wilfred Cantwell Smith (1916-2000) was a theologian who wrote extensively on the issue of belief and came to the conclusion that, paradoxically, the surge of “religious belief” (with the option of being religious or not) is connected to a depressing decline in religious practice. I came across Smith in an interesting article by Walter Van Herck, who explains that at one point in Christian theology (Smith’s golden age) “the utterance ‘I believe in God’ testified to…loyalty and to the firm resolution not to betray God. The confession ‘I believe in God’ is therefore directly against the possibility of sin and against the possibility of unfaithfulness and betrayal. This confession is not directed against the possibility that God doesn’t exist or would not exist.” In other words, according to Smith, at one time to believe meant to trust, or trust in. It’s a subtle difference, maybe, but you feel it in the fact that a certain (antiquated) usage of belief made it contradictory to claim that you believed someone’s utterances and at the same time didn’t trust them in a larger sense.

Van Herck likens a prior conception of religion (Christianity, included) to a mother tongue. No one believes in English or French; it is simply, in the words of Taylor, “something in which you stand.” As Van Herck puts it, “This linguistic practice is not based on any antecedently acquired insights.” But now, over time, being religious means something different—that is, holding a number of opinions about, first, the existence of God, and then about the best way of living based on an individual quest that involves meditation, reading and conversation. Religion becomes, thus, “the personal selection of appealing ideas.”

This seems quite natural for us nowadays—and we pride ourselves on allowing everyone to come to their own conclusions, to choose whatever ideas they find appealing—and yet it’s instructive to see how far we’ve come not only from pre-axial assumptions (where religion is embedded and irresistible) but from Christian ideas of trusting God and submitting to God, however appealing or not the consequences. For Smith, this new notion of belief amounts to heresy—what is a heretic, after all, but someone who tinkers with their own religion? As Van Herck puts it, using the words of the Dominican, Jan H. Walgrave: “Belief is no conjectural groping, but the pre-possession of the beatific vision.” Well, not any longer.

Famously, the Reformation gave license to such “conjectural groping.” Everyone gets to read the Bible, and everyone gets to choose their manner of interpreting the meaning of the Bible. In the rhetoric of contemporary Christianity, everyone can, if they choose, have “a personal relationship with Jesus.” Elements of ritual (seen by Taylor as leftovers of a pre-axial disposition) are ditched, and what comes to the fore are belief systems that involve a decision to (1) believe in God, and (2) believe that Christ died for our sins. As Taylor puts it, “The point of declaring that salvation comes through faith was radically to devalue ritual and external practice in favor of inner acknowledgement of Christ as savior.” In other words, religion becomes personal—very personal. “One is not simply a member in virtue of birth but…by answering a personal call. This understanding in turn helped to give force to a conception of society as founded on covenant and hence as ultimately constituted by the decision of free individuals.”

In quoting Taylor here, and suggesting that he’s on to something, I’m obviously tipping my hand. But I do feel myself swayed by this argument, and by the obvious connection between developments within Christianity (history’s big winner) and modern conceptions of freedom, individualism, democracy and the (natural) assumptions of a humanistic point of view. In the words of Gauchet, “The greater the gods, the freer humans are.” Not that God must go away entirely. You only need to think of the role of Deism in the imagination of early America to see the new possibilities. As the Declaration of Independence puts it, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Here God is, as Taylor puts it, “the artificer of the immanent order.” And it’s easy to imagine the next step, where the transcendent realm is no longer necessary at all. “To secure these rights,” Jefferson and the others go on, “Governments are instituted among Men, deriving their just powers from the consent of the governed.” In other words, our job is to do the best we can to pursue a way of life that is in keeping with God—whether he exists or not. In this conception, thinking more philosophically, a good life might involve figuring out how to behave as if there were a God.

Daniel Hayes

A New Ambiguity

Christianity is unnecessary. I keep trying to remember that. It’s silly to think that Christianity was somehow historically inevitable. It may seem that way looking back, two thousand years later, and perhaps it’s useful to imagine its inevitability and then draw up the reasons why it came into being and so flourished; but I still prefer to think of it as almost an accident—some crazy motherfucker who claims that he’s the Son of God, and somehow the claim gets traction. A lot of traction! Who could’ve guessed? (What Jesus actually claimed, during his life, is in fact open to much historical dispute; and so it may be that what people claimed him to have claimed is the more important factor in the rise of Christianity. Either way, it seems like something akin to a religious miracle in retrospect.)

From the point of view of someone trying to understand the axial revolution, and how its duality (between things transcendent and immanent) might be fodder for further developments, intensifications, or interpretations of this split, nothing seems as important as the Christian concept of the Incarnation. If axial religions suggested a God who was far away, distanced from human affairs of the everyday variety (no embedded spirits, and a new suspicion of magic and superstition), the story told in the New Testament proposed an interesting bridge between the divine and the human. We can quibble about the theological niceties, but the general idea seems clear: God takes human form, God becomes man in the person of Jesus Christ. God so loved the world that He sacrificed his only Son so that we might overcome the distance between here and there—a distance first created by the axial revolution. Jesus poses himself as a solution to this problem of the divine becoming more separated from the world, with a world more separated from God. And best to remember that in a pre-axial view of things, none of this makes sense—the world is divine.

Also, I think it’s crucial to point out the relatively modest station of Jesus—a man who was, after all, to become “King of Kings.” As Andreas Michel explains it, quoting Marcel Gauchet, “The mutual exteriority of the two worlds is represented in the disparity between Christ’s lowly birth and his Incarnation of the highest otherness (God’s transcendence). When God becomes human in Christ—a man at the bottom of the social ladder—he incarnates himself, against all tradition, in the opposite of all earthly hierarchy….For here the Incarnation must not be ‘understood in terms of the political logic of higher and lower, but in terms of a purely metaphysical logic of otherness.’ Here for the first time in history, political logic (hierarchy) and metaphysical logic (immanence/transcendence) drift apart.” When Jesus says, “My Kingdom is not of this world,” he’s suggesting something entirely new, in terms of politics but also in terms of human existence—a dual life, as it were, where we might live in the world but be not of the world.

Christians have forever been toiling to understand exactly what that means—this new equivocal nature of being a fallen creature, or of beginning as a fallen creature in pursuit of something else. In short, there is a tension here between living in the world, in the here-and-now, and living according to the dictates of another, heavenly realm. And it is this tension—axial in nature, and supremely represented in the Incarnation—that so interests interpreters of secularization, and particularly Gauchet. In summing up Gauchet’s position, Andre Cloots writes, “The true originality of the relation to the world established by Christianity lies in its ‘axiomized ambiguity’ or ‘ambiguity in principle’….The combination of these two affirmations—the ontological completeness of the world on the one hand, and the world not simply being the expression of the divine will but needing salvation, on the other—made possible the ‘specific Christian solution’…a culmination of an instability brought in by the axial religions between world-negation and world-affirmation. It consists not in purely accepting the world as it is, nor in just turning away from it, but rather in investing in the world in order to make it better, as a religious duty, in the name of God.”

And so I want to ask, following this logic, what is the nature of this investment? (The world is clearly insufficient as it is.) But also I’m wondering, in terms of religious belief, about the exact parameters of this “salvation,” or by what mechanism one can get right with God. It hardly seems automatic, and it clearly doesn’t have anything to do with earthly designations, either political or ethnic: “There is neither Jew nor Greek, there is neither slave nor free, there is no male and female, for you are all one in Christ Jesus” (Galatians 3:28). It seems more like a private, individual decision—a choice.

Daniel Hayes

And So Now What?

[I need to do a little homework—give some background. And so here I’m less asserting things than trying to tell a story. Whether this story is true or not is of course open to question. And certainly the story is much more complicated than I’m going to make it out to be. But I want to get down the general outlines of what some call “the axial revolution.” And I will also quote occasionally from Radical Secularization?—a collection of essays that I found most useful in understanding shifts in religious thought.]

I’m reading Karen Armstrong’s The Great Transformation: The Beginning of Our Religious Traditions, to help me better understand what are called axial religions. Already, beginning as she does with more so-called primitive religions, I’m getting the picture of how God or the gods or deities or spirits could actually be embedded in daily life, existing in objects, and so rendering our distinction between higher and lower, transcendent and immanent, largely beside the point. The axial revolution has to do with monotheism, yes, but more interestingly with a God who goes thataway—farther away from us, into another realm. In other words, to put it simplistically, two realms came to appear where once there was only one (even if that one was a very complicated mashup). Marcel Gauchet calls this new religion, after the axial revolution (which Armstrong locates between 900 and 200 BC, whereas Karl Jaspers, who coined the term axial, dated it slightly differently), “transcendent religion.” A part of this transcendence obviously has to do with God, who exists in his own realm, but there is also for the first time a distinction between ordinary humans and those who can, for one reason or another, make better contact with this realm beyond the human. (There was, according to this idea, an egalitarianism gradually replaced by a hierarchy—and a corresponding politics.) God becomes larger, more monotheistically powerful, but also takes on a certain unknowableness—and hence the quest to know him, make contact with him, and to figure out exactly what he wants.

Andreas Michel explains this new predicament in this way: “For human beings, the result of this revolution in transcendence is that they are now completely separated from the source of the divine, unlike before when gods dwelt among them.” That’s the bad news. The good news has to do with the various possibilities of reaching God, of knowing Him—and so here is introduced a kind of striving that is probably easiest for us to understand by thinking of a quest (for absolution, for higher meaning, for good standing in the eyes of God). We live in one world, the profane world, and we strive to gain access to the other world, the sacred (transcendent) world. In this sense, theology becomes possible because the transcendent must be interpreted. Gauchet calls this “speculation about the absent.”

At bottom, then, a duality is introduced in human history—the sacred and the profane, the invisible and the visible, the transcendent and the immanent, this world and the other world. Consequently, according to this idea of an axial revolution, there is for the first time the idea of a universal good that goes beyond the idea of human flourishing. Such notions of flourishing are no longer good enough. Meaning is not to be found in simply living daily life (with its stumbling upon deities at every turn), or looking to the past, but in seeking a higher life, a better life. Laurens ten Kate has an interesting way of putting this: “The axial God is always the invisible God, he who is by not being or by being beyond being.” Being is no longer itself sufficient. And in this picture of the world, God is now an outsider, an intruder, a specter in history. God becomes a dimension beyond the world, outside of the world, as opposed to the gods of more primitive religions who existed in the midst of humans.

This is scary stuff, I think. It definitely involves a shift in how human beings think of themselves. It might be useful to think of this shift in terms of ownership. After all, who owns the world, or what is now the world that might be owned? The world does not own us—that would be a return to the non-axial experience of the world; and yet we don’t own the world, not the way that God might own the world. We are somehow caught in the middle—neither owned nor owning. There is something downright uncomfortable about this predicament, at least in contrast to what life must’ve meant for people who lived in pre-axial days. Taylor speaks of those people of more primitive religions, or folks nowadays living in the aboriginal world, as possessing a “mood of assent,” not feeling the kind of “quarrel with life” that comes from living from an axial point of view, suffering a duality of existence that might leave a human being feeling unmoored or caught in the middle.

And yet, on the other hand, it’s easy to see a kind of freedom here—a space that opens up where before it didn’t exist. It doesn’t make sense to think of pre-axial peoples as less free, since the possibility of this freedom didn’t even exist. The new duality is what creates a space whereby we can make our own path, create our own ways to climb the ladder to a higher, better place. Where we stand is not where we want to stand. And so now what?

Daniel Hayes

Misguided

In the public arena, there’s been an ongoing battle between atheists (Dawkins, Harris, Hitchens, Dennett) and those who support some form of religiosity or a belief in a higher being (ranging from hardcore evangelicals to new-age spiritualists). As usual, there’s pretty much a total disconnect between this public squabble about God and a set of much more interesting developments in scholarly work, in so-called secularization theory. (For those of us who live in the in-between, it can be very, very confusing.) In the public imagination, the category secular is generally accepted and easily defined, and so the debate is simply over how far this category should extend into the public arena—to the extent of stifling, or making difficult, private beliefs that are resolutely not secular. That the secular is in tension with the religious, or that it exists as its opposite, is taken for granted. Meanwhile, in the academic world, the relationship between the secular and the religious—and particularly in the instance of Christianity, history’s big winner—has seen an utter transformation. While prominent secularists in the public sphere have asserted an either/or proposition, many academics and intellectuals have constructed a messier, more complex picture of things, going so far as to claim that modern secularism, in whatever flavor, is nothing other than an offspring of Christianity.

The important books behind this intellectual transformation are not always recent. Karl Lowith’s Meaning in History: The Theological Implications of the Philosophy of History, a key text in support of seeing a continuum between Christianity and secularism, appeared in 1949. Marcel Gauchet’s The Disenchantment of the World, building on Lowith and delving deeper into theological matters, particularly the Incarnation, appeared in 1985 (though not translated into English until 1999). The seeds of a new turn in secularization theory were present, but it took time for them to germinate. And then Charles Taylor created a firestorm in 2007 with A Secular Age, which made an exhaustive case for a more nuanced story of the Christian origins of the secular, replacing previously more simplistic accounts of modernization, secular progress, and the obsolescence of religion. Instead of seeing secularization as a “natural” or inevitable process, or of seeing “the secular” as a neutral, worldly environment left in the absence of religion, there developed a loose consensus (with plenty of dissenters, of course) that saw secularization as an outgrowth, if not a flowering, of Christianity. You might even say “a natural outgrowth,” but that would take away what’s most interesting, particularly in Gauchet—the wild contingency involved in the story of how a religious fanatic changed history and planted the seeds for everything we accept today as a bunch of pretty good (distinctly secular) ideas: individual choice, democracy, human rights, self-determination, etc.

On a personal level, I’ve found myself increasingly persuaded by this argument about the history of secularization. At bottom, this narrative strikes me as superior to what Taylor refers to as “the subtraction theory” of secularization—whereby we shake off superstitious ideas, transcendent aspirations, and bad science, only to see the world for what it is. Some might think of Taylor’s subtraction theory as a straw man—too simplistic a picture of what most people consider the true story; but for me it very aptly describes my own assumptions: God talk is mumbo-jumbo, Christianity has outlasted its usefulness, and so we are left in a new, godless world that demands of us a new way of thinking. (This is essentially a Nietzschean vision of things, with the future as being anything-but-Christian, even if the solution to nihilism is still up in the air.) It makes me cringe a little to admit it now, but I think this picture of things is, in a more general sense, linked to an unexciting but sober theory of modernization that I picked up in the late 70s and never completely set aside. Yes, this tale of inevitable change had obvious political problems (e.g., colonialism, capitalism, the messiness of globalization), not to mention ontological problems (e.g., eschatological notions of “progress”), but some kind of process was clearly occurring and it was tied to increasing secularization, which was tied to a lessening of religion (however rocky the road) that saw Christian assumptions going the way of the Edsel. Or so we hoped.

This explanation of recent history now seems to me very misguided.

Daniel Hayes

Curiosity

[Note: After a hiatus, I’ve returned to heavy reading. Luckily, my reading has shaken my senses, stirred my thoughts. (This rarely happens, I think.) And so I hope to write a series of short meditations on the secular, secularism, secularization, and religion—somewhat reflecting the way my mind and opinions have lately been altered, even if subtly. I begin here.]

I’ve been reading Stathis Gourgouris’s “Lessons in Secular Criticism” and finding myself annoyed. Or at least realizing what matters to me most. Gourgouris’s book is a diatribe against recent developments in theories of the secular, which he considers politically retrograde (i.e., conservative in spirit, and certainly so in terms of their implications). I realize that this way of thinking—teasing out the implications, and then damning the theories according to political preference—goes against my genuine (though perhaps somewhat naïve) desire to figure out exactly what the fuck has transpired over the last two thousand plus years in human history. (It’s a big question, I know, but what else is there to think about?) In other words, theories of secularization—what this phenomenon means, and what it means about so-called religion—are tantamount to stories: that is, attempts at describing a contingent, historical process. I realize that these stories inevitably betray political points of view and don’t appear out of nowhere, without ideological context (which is fascinating in itself, in terms of intellectual fashion); and yet I’m not going to pick out what story is most compelling to me based on what seems to me a straightforward, somewhat ham-fisted political calculation of its overall effects. I’m trying to figure out what happened, and why it happened the way it happened. Where that leaves me—in terms of personal belief, and what political positions I take—is of secondary concern. Again, this approach may be naïve, but I’m not sure what other choice I have if I’m keeping true to the elemental curiosity that makes me ask the question in the first place.

sources of moral convictions

Spin

Just to continue the current line of speculation.  If we live in the James/Rorty universe, there is no clear dividing line between justification and spin.  It is all just persuasive speech, meant either to persuade myself or to persuade others.  Or, to put it in a slightly different vocabulary, there is no clear divide between “reason” (legitimate grounds for believing something) and “unreason” (illegitimate grounds).

But that doesn’t make “asking for and giving reasons” utterly pointless.  There are benefits to trying to articulate for oneself and to others why I believe what I do believe.  In some cases, the exercise will lead me to change my beliefs; in other cases, it will lead others to change their beliefs.  Persuasion does happen.  We just don’t have any surefire way to say I was persuaded in this case by reason, in that case by emotions.

Disgust at homosexual behavior is less prevalent in 2015 America than it was in 1975 America.  Daniel and I think that is a good thing.  Daniel is worried about the hubris involved in calling that change “progress.”  After all, we just take the belief we think is better and then label any movement toward that belief’s prevailing as progress.  There’s no independent standard for progress in that case.  I think it’s not that dire.  We can offer a story, offer reasons, why things are better when homosexuality is not demonized.  Pointing to social goods like peace, conviviality, inclusion.  Of course, that still depends on people generally valuing peace, conviviality, inclusion.  You can’t get total agreement here, especially if there is no agreement on some basic underlying goods.  In any case, I’m willing to argue that some things are progress.  But I see the force of Daniel’s suspicion of the label “progress”–and I am not willing to claim that something called “reason” provides us with a standard for progress.  Rather, progress is measured in relation to concrete goods.  And our allegiance to those goods is not, it seems to me, a product of reason.  Kant was just wrong about that, as is Habermas.

Daniel Hayes

My Head, Spun

I hadn’t quite meant to say—in a general sense—that a moral objection to X (in this case, homosexuality) comes first, as an almost visceral opinion, and only then comes the justification (i.e., reasons, argumentation, intellectualization). I was speaking only of my dislike of Christian commentators who act as though they’re always struggling to find the good and the right, and the good and the right always seem (just coincidentally) to be in line with American conservative values of the Republican sort.

But yes, sure, going a step further (and following John’s lead), perhaps this is a general tendency (or even a truth!), and another way that Taylor has forced me to rethink why we think what we think, and why we think we have better reasons for thinking what we think than others (even when, maybe, we don’t). What we seem to do, on a regular basis, is act as though moral and political choices are there for the taking—the smorgasbord theory of moral/political action. Some people (enlightened folks) pick right, some people (stuck in the past) pick wrong. Not noticing, as Taylor would point out, that the so-called choices change (is the matter of having a slave, or taking a stance on slavery, really a current “choice”? should we be patting ourselves on the back for our enlightened views of slavery?); but also not noticing that even our carefully wrought decisions on current choices are hardly made through some straightforward process of intellectual deliberation.

One argument might go like this: “Well, okay, John is right to speak of the sway of ‘inarticulate feelings,’ but that still leaves open our choices of what to do with those inarticulate feelings.” Or, in other words, even if choice isn’t quite as far-reaching as we might like to think, it can still be the determining factor. But this seems just a way of kicking the can further down the road. (And holding onto our pride as we kick.) It still makes it difficult to fault someone who, first and foremost, simply thinks of homosexuals as disgusting, and then fashions arguments against same-sex marriage. Or, more precisely (since I find it pretty easy myself to fault these bigots), it makes it difficult to fully justify finding fault.

None of this—this questioning of justification—makes me feel comfortable. I mean, what’s the point of arguing about things, putting together opinions and presenting them to others, if you don’t think you’re right and they’re wrong for reasons that go beyond whim? (Whether you wish to execute your moral opponents for disagreeing with you—that’s another matter. Humility has its social benefits.) It seems we have nothing but justification, however contingent it is on history, culture, and our particular upbringing. And so we return, in a circle, to the same point of view: you should buck up and get over the disgust you feel for homosexuals, perhaps by learning more about sexuality and questioning some of those suspicious religious beliefs you have.

The whole thing—this circle—leaves my head spinning. As does the question of why John McGowan turned out differently than Antonin Scalia. I wish to claim ignorance on these matters and climb back into my shell.

sources of moral convictions

Justification of Moral Beliefs

Another short comment spurred by something Daniel has written.  He says (I am paraphrasing) that it often seems as if the objection to homosexuality comes first and then comes the appeal to religion as a justification of that objection.  In other words, the objection is not motivated by the religious belief that is then claimed as its foundational justification.

I have been worrying about this issue for some time, without being able to come up with a good way to talk about it.  Basically, I think a certain way of reading William James (and perhaps of reading Rorty) leads to the anti-intellectual conclusion that our moral beliefs are based on fairly inarticulate feelings (of disgust, aversion, approbation, admiration, fear, delight) that we only secondarily justify by “giving reasons.”  Thus, in an example similar to Daniel’s, we have Christians who oppose capital punishment and those who support it.  Both sides have theological arguments they can present to explain why they take the stance they do.

But are those theological arguments the source of their convictions?  I am skeptical.

But I am also uneasy with just saying that one’s moral convictions simply emanate from a black box we are going to call “feelings,” or “intuitions,” or “sensibility.”  Those vague terms say just about nothing at all; they just push the problem down one level.  I think capital punishment is wrong because I “feel” that it is wrong?  A claim expressed in those terms is just three-card monte.

Of course, this is also where “tradition” or “culture” is often rolled into place in order to explain how an individual got those feelings or intuitions.  But I have expressed already (in previous posts) my uneasiness with the appeal to tradition or culture.  Do we have a story that explains why I, raised a strict Catholic, do not now believe in God whereas lots of people raised Catholic do believe in God? Does such a story exist, one that can really do the explanatory job?  What explains the differences in values and moral convictions between me and Antonin Scalia (whose upbringing was remarkably similar to mine)?

Uncategorized

Choice and Tradition

Daniel is raising such thorny and (yes!) interesting issues that I feel better taking up small bits than trying to chew on it all at one go.

So here are some thoughts (and some questions) about tradition.  One fairly common way to describe “modernity” is to claim that modern humans (for a variety of reasons) are more likely to encounter other humans who live by values and in practices fairly different than one’s own.  In other words, a modern person is acutely aware that there are multiple traditions, multiple cultures.  And that makes modern people more self-conscious about their own tradition–and about whether it is justifiable and about whether it is preferable to some other tradition.  Ethnocentrism is a concept that only becomes possible when one at least acknowledges that there are cultures different from one’s own.

We need not accept this description of the modern; we might argue that humans all the way back were always encountering other tribes and were always acutely conscious of the differences between “us” and “them.”  I have nothing invested in claiming “modern” humans are different in this particular way (or in any other for that matter) from “pre-modern” humans.  On the whole, I am mildly skeptical that the term “modern” designates much of anything. It’s a structuring term for lots of thinking, but may not actually point out anything “real.” (Daniel talks about Taylor’s desire to “get it right.” “Modern” may be a term that is a serious obstacle to getting it right.)

But here comes the kicker. The really questionable (and I am sure, to Daniel, objectionable) move is to say that this modern self-consciousness enables the modern person to gain some kind of critical distance from her own tradition. Once my tradition is juxtaposed to another tradition, I am better able to understand that my tradition is not self-evident and, in fact, stands in need of justification. Why should I follow my tradition when that guy over there follows his?

This, to me, is the key point. Is an individual in a position to evaluate her own tradition? And doesn’t the ability to evaluate also entail the ability to accept some parts of that tradition and reject other parts? Must one automatically live out the prejudices and practices of one’s tradition? Or is there some place from which to make choices to do this, but not do that? And if (as I do) one believes that some choices are possible, how to explain how that possibility arises?  I am hardly claiming to have a good account of the fact that one can make choices.

A final note in a somewhat different key. The coherence of tradition also poses big problems. All our traditions appear highly heterogeneous. So, as my last post suggested, even if we talk about a Christian tradition, there are so many different ways of being Christian (Daniel’s liberal, conservative, and crazy being just one way of parsing those differences) that it is very possible to become skeptical about what work the term “tradition” can do. If a person is raised Baptist, but then converts to Catholicism, has she made a choice? Has she changed traditions? Will her new self be some kind of hybrid between beliefs ingrained in her since birth and her attempted embrace (sometime after childhood) of a new set of beliefs? You can see the puzzles.

Uncategorized

Bad History

I am behind, obviously.  Daniel has given me lots and lots to chew on.  But right now I just want to make one small intervention.  Bruni is guilty of straightforward historical ignorance.  Christians in the 2nd century AD did not condemn homosexuality.  (John Boswell is the historian who has documented this stuff.)  Christianity has a long and complex history, not to mention the fact that it has also always come in a variety of sects. (There were countless heretics before there were Protestants.  Medieval Catholicism was only slightly more, if at all, coherent than current Christianity, with its 50 varieties of Protestantism.) There is no set of beliefs or codes that is Christian.  There is, instead, a family of beliefs, prohibitions, and sensibilities that have been labeled Christian, and they have been constantly morphing and changing.  Anyone who takes what so-called conservative Christians believe today and thinks those beliefs somehow distill the essence of what Christians have believed and practiced over the long history of Christianity quite simply doesn’t know squat.  That’s just bad history, meaning ignorance parading as knowledge.

Daniel Hayes

How It Works (Part 2)

I tried in my previous “Primer” to describe contemporary Christian sensibilities on the question of homosexuality (with same-sex marriage as the hot-button issue). I wasn’t quite putting myself in the shoes of, say, conservative Christians; but I was trying to avoid the easy, polemical attributions of outsiders.

So much for that. Now I’ll reveal my true colors. When I read Christian conservatives on gay marriage, or even more so on the general issue of homosexuality, I find myself angry. Again, I’m not talking here about the crazies but about Christians with some sophistication and theological depth. I read their blogs and I think, “They are conservatives first, and Christians second.” (Believers would chafe at such an assessment.) Their arguments seem strained, and elements of simple bigotry slip in way too often. And when I read someone like David Brooks (in “Religious Liberty and Equality,” in the New York Times), trying as he does to reasonably suggest that opposition to the original Indiana law is wrong-minded and counterproductive, I want to tear off his glasses and throttle him. Do I have any sympathy for religious folk not wanting to participate in gay marriages? As far as I know, there aren’t any laws that make people go to weddings. I suppose I have sympathy for churches that don’t wish to have their so-called sacred facilities used for the purpose. But these knucklehead caterers, these moralistic florists? No, that’s simple discrimination. Squirt out the words on the top of the cake, take your money, and go home and feed your family.

And yet I read Frank Bruni’s New York Times opinion piece, “Bigotry, the Bible and the Lessons of Indiana,” and I can’t stop shaking my head, disagreeing with its overall story of how we’ve gotten to where we’ve gotten, and its bald-faced plea for all Christians to get on board the ship to history’s increasingly better future. There are things about Bruni’s piece that I do like. It is, on the surface, a refreshingly bold stance on the topic of gay marriage and contemporary Christianity; it takes a position that many Christians, mostly liberal, have taken, and it shows that it’s quite possible to be a Christian who supports gay marriage and sees homosexuality as just another option. Or better, Christians might even be on the forefront—speaking of love, its many dimensions, its variety of expressions, and so on. (What about gay sex, or straight sex, that doesn’t involve love? What about people who aren’t interested in “loving relationships,” which seems always the ticket to acceptance? Okay…I’m getting off-topic.) Bruni does a nice job of describing how any “good Christian” can get there with a little finesse. As he writes, “Homosexuality and Christianity don’t have to be in conflict.” Not anymore.

But it’s the not anymore that I find troubling. Bruni tells a progress story, and he’s asking Christians to be, in this sense, progressive in their thinking. What bothers me isn’t the assumption that progress is good, but that it actually exists—an entity, a force, a causal agency of history. He refers to “beliefs ossified over centuries,” and how Christians can escape these ossified ways of thinking by “rightly bowing to the enlightenment of modernity.” (Oh that—again!) In other words, current circumstances—that is, benefiting from what Bruni refers to as “the advances of science and knowledge”—make it possible to see things for what they are, “freeing religions and religious people from prejudices that they needn’t cling to and can indeed jettison.” These good folks, freed from ignorance, can now “hold an evolved sense of right and wrong.” Once this wasn’t possible, but now it is. As Bruni explains, for example, “there wasn’t any awareness back then that same-sex intimacy could be a fundamental part of a person’s identity.” We now know things that they didn’t know back then. And Bruni implies that the ignorant—those living back then, and even conservative Christians today—should be forgiven because, after all, anti-gay views are not so much a result “of hatred’s pull as of tradition’s sway.”

Tradition. This is a key word in the battle between liberals and conservatives—both now, in American politics, and in any attempt historically, theoretically, to distinguish between, say, a Locke and a Burke. What’s interesting about Charles Taylor is that he completely sidesteps this endlessly boring discussion of tradition, this loop of disagreement about whether tradition is good (because we build on the knowledge of our predecessors) or bad (because it stops us from making free choices, given new circumstances). Or at least Taylor suggests that the very idea that we might pick and choose, looking at traditions as items on some vast menu of possibilities, is a prejudice of modernity (and obviously a way that liberalism has captured even the minds of so-called conservatives). In other words, from an historical point of view, it’s naïve to think that we can pull levers and simply move our selves away from tradition and its sway, as though we were agents of our own destiny. (I take this not as an argument for conservatism but as a way of shining light on the shenanigans of liberalism.)

For Bruni, tradition is to be questioned. Which seems reasonable. Less reasonable is the idea that tradition might be replaced by ideas that are simply devoid of the stuff of ignorance or prejudice. What I find disturbing is Bruni’s assumption not only that history is unfolding according to some accumulation of knowledge but that modernity itself is a moral achievement that allows an “evolved sense of right and wrong.” We are now free to act rightly, so goes the story, whereas once we were but the plaything of our own ignorance. As though tradition were a thing of the past and not simply a way of referring to the past. Are we not now in the business of establishing tradition? Do we simply “jettison” prejudices—knocking them out of our way, one by one, as we march toward history’s future? Are we seeing things clearly now—aware, finally, after all these years, that “same-sex intimacy [can] be a fundamental part of a person’s identity”—or are we simply seeing things differently, taking on a new viewpoint? (Bruni’s argument also assumes that a notion, such as “same-sex intimacy,” means anything without historical context. And ditto for “a person’s identity.” In other words, were folks in the seventeenth century ignorant of what might make up “a person’s identity,” or were they playing with a different deck of cards?)

Also, I think that Bruni’s view of religion—Christianity, in this case—is remarkably narrow. He sees it less as a worldview, with a long tradition, and more as a series of codes, less as a way of life and more as a bunch of isolated beliefs that may or may not fit together. And so the question becomes, Can one belief be squared with another? Can the conservative Christian believe in God’s goodness and also believe that homosexuals are no different than heterosexuals in the eyes of God, and no worse for their sexual choice? Why should this “difference” make any difference, especially given everything we now know about sexuality and its expression? But what if, as I outlined in my “Primer,” this difference isn’t entirely a matter of gay or straight? What if there are larger implications, having to do with natural order, relations between men and women, God’s vision of what a family should be? To me, all of this is nonsense, of course—but we should at least realize that a worldview is at stake here, not an isolated idea about sex that might easily be excised. In other words, the problem is much larger and much more intractable, at least among conservative Christians who would be forced to give up complementarian ideas that are at the core of their thinking.

Maybe Bruni’s piece would never sway a conservative Christian, but it no doubt has a rhetorical force that appeals to both nonbelievers and liberal Christians. One argument in favor of Bruni’s piece, and against what might be construed as mere nitpicking (by people like me), is that Bruni is being an activist, presenting a case and using the rhetorical tools of the time. If Taylor is right, and “subtraction stories” are all the fashion, then why should Bruni not rely on this particular subtraction story—where we realize the ignorance of our predecessors and jettison our prejudices? Why shouldn’t Bruni appeal to our better side, even if our better side is a self-serving conceit? I’m somewhat swayed by this argument in favor of Bruni, which of course reveals—here, at last!—my own confusion. And truly, this is really the reason behind my writing these two “How It Works” blog entries, and using the moral and political issue of gay marriage as an illustration of a real problem. Is there another way of arguing in favor of gay marriage, and against despicable attitudes toward homosexuals, without telling a feel-good story of modernity, without suggesting that history advances as we expunge moral prejudices, as we gain knowledge and courageously resist the sway of tradition? To tell you the truth, I’m not sure. And I’m hoping that John can help here, even if he doesn’t entirely agree with the way I’ve spelled things out.

I do want to say this: “This isn’t how it works.” That is, the story that Bruni tells about moral progress, about modernity and the advances of enlightened thinking, is wrong. And it is, from my perspective, a story that needs to get a whole lot less airplay. I agree with the political aspirations, and yet—please—the rhetorical scaffolding needs to be jettisoned. But if this is true, and if these simplistic visions of modernity are at best foolhardy, then how does a citizen argue for change, or any political position, without resorting to the foolhardy? Isn’t there a way to be smart about the mechanics of history and make the case for historical change? Am I continually to be switching hats, never feeling comfortable in either of my selves (academic and political)? Isn’t there a way to make an argument in favor of gay marriage, and against all discrimination against homosexuals, without resorting to fairy tales?

Daniel Hayes

Christianity, Homosexuality, and Complementarianism: A Primer

I think it’s best before I get to the second part of “How It Works” to deal with a few basics about the issue at hand. Eventually, I want to deal with Frank Bruni’s article in the New York Times, “Bigotry, the Bible and the Lessons of Indiana,” but I think this opinion piece only makes sense if we understand some of the key issues. And for once I think I’m qualified to discuss these issues, because (a) I used to be a believer, (b) I have a fairly good understanding of the Bible, particularly the New Testament, and (c) I’ve kept up with various theological controversies in Christianity, and particularly those that involve popular issues (e.g., homosexuality, politics). I read blogs, I spend too much time thinking about where I would stand if I hadn’t given up my faith, if I hadn’t come to the conclusion that Christianity, in whatever flavor, is a colossal waste of time. (And yes, I realize that the repeating of “time” in the previous sentence suggests a contradiction.) My interests are purely Protestant, though, and so the thoughts below don’t necessarily address Catholic concerns.

Anyway, here goes. I think of contemporary Christianity as appearing in three types: liberal, conservative, and crazy. People like you and me often conflate those last two, which I think is unfortunate and probably intellectually lazy. It’s easy to point to the crazies and laugh and then conclude that Christianity, at least in its American version, is a total farce. But the vast majority of Christians in America are not crazy conservatives; their views may be reprehensible nonetheless, but these believers are not so easy to categorize and dismiss.

I don’t use the term “evangelical” because, contrary to common thinking, the term applies to a theological point of view that has a long and rich history in America. To be evangelical is not necessarily to be politically conservative. Only recently has the term come to suggest this, to be used (by both believers and political commentators) in this way.

Liberal Christians generally think that men and women aren’t significantly different from one another; they should be afforded the same rights, and their roles (e.g., in the family) are somewhat interchangeable, at least in theory. Homosexuality is a trickier issue. Most liberals have been “struggling” with the issue, as they like to say, though this struggle is very quickly turning into a pretty firm position that sees homosexuality as a “choice,” not a “sin.” Accordingly, liberal Christians have become supporters of same-sex marriage. Brian McLaren, who is probably the most prominent liberal Christian in America, serves as a good case study: a lot of “struggling,” and then, after a time, he serves as the minister at his gay son’s wedding, and now, as you might guess, he’s totally on board. Things happen fast.

Conservative Christians aren’t so much on board. Some have moved in this direction, and hold their noses but accept homosexuality; more than a few may not condone homosexuality, but they think that same-sex marriage is here to stay and should be afforded legal rights and protections. But most conservative Christians don’t accept homosexuality, and find themselves struggling with the notion of same-sex marriage. (Should it be a legal right? Does a good Christian condone a sin by attending such a wedding?) There has been an interesting and powerful movement in conservative circles to embrace homosexuals as no different than other sinners (e.g., drunkards, adulterers, gluttons), and there’s much discussion on conservative forums about whether homosexuality has been wrongly singled out. (The answer is usually “no,” but the question is at least being asked.) Yes, there’s something suspicious about this approach—”hate the sin, love the sinner”—but it nonetheless figures as a real change in conservative circles, where believers once simply spewed hatred for homosexuals without compunction.

The crazies, as you might imagine, simply spew and hate. And see same-sex marriage as the end of civilization as we know it. I don’t want to deny their influence, but I don’t think they’re particularly interesting interlocutors.

From my point of view, I think it’s impossible to understand conservative Christians and their opposition to homosexuality, and particularly their view on same-sex marriage (which has become an obsession), without understanding the utter centrality of what’s called “complementarianism.” This term offers a positive spin on the idea that men and women are different, that they should have different roles, and that those roles are complementary. Are men and women “equal”? Of course, says the complementarian; they are just different, with different “gifts,” which imply different roles. Sometimes, in the harshest versions, this means that women should stay in the home, submit to their husbands’ wishes, etc. Sometimes, in milder versions, it means that women have a place in the work world, in the larger social and political world; but women still have primary domestic responsibilities (e.g., the raising of children), and men are seen as suffering the burden of judiciously wielding ultimate authority over the family.

The crucial factor, which goes back to theological and biblical precedents, is always the question of the role of women in the church: Are they allowed to participate in leadership duties? Can they preach from the pulpit? For liberals, this is not an issue (though liberal ministers or theologians will nonetheless find it necessary to speak of the issue of “complementarianism,” if only to say that it’s wrong-minded). If the roles in church administration are the same for men and women, then you can bet your bottom dollar that homosexuality will very soon be taken off the “sin list,” if it hasn’t already.

For conservatives, the problem with homosexuality—and particularly same-sex marriage—is that it gets the gender roles wrong. (This is why they often speak of same-sex marriage as a continuation of, or a logical conclusion to, the sexual revolution of the 1960s and its feminist facet.) I don’t think this is always acknowledged or properly understood. In other words, often the discussion (from afar, in the minds of unbelievers) has simply to do with homosexuality being a sin, and the absurdity of that. (Christian believers don’t help with their reliance on particular verses of the Bible, as though their religion were a series of codes to be followed.) But what’s at stake, for conservative Christians, is nothing less than a picture of nature: what is a human being, and what is a man, what is a woman? In this scenario, homosexuality upsets everything. If a man has sex with a man, or a woman has sex with a woman, what does that make those men and women? Sinners, yes, but there are also implications for the very order of the world—the way God intended it. (There is also, of course, the issue of procreation—but the impossibility of offspring is only a sign of the fact that men and/or women are behaving in the way that God would not want them to behave.) Again, you could see this as just another way of saying that homosexuality is a sin. But what I’m trying to argue here is that this sinfulness only makes sense in the context of a larger worldview that involves a picture of how men and women function together in an ordered way.

For those of us who don’t believe in God, all of this seems silly. “God intended it”? Well, God doesn’t exist, and so nobody intended anything. Still, if we’re going to avoid the easy way out, and resist the temptation to pat ourselves on the back, we need to keep Taylor’s admonition about “subtraction stories” in mind. We have our own ideas about men and women, gay and straight, and these ideas are not simply the product of subtracting ignorance (i.e., conservative Christian ideas of gender roles). We, too, have a worldview and notions of “an ordered way.” To say that we don’t—to say that we simply want to bring down the barriers that obstruct the way things really are—seems to fall into the trap of what Taylor refers to as a naturalization of a product of history. Furthermore, anyone who has a daughter knows that these gender roles, so often the object of derision, are alive and well (and often secretly embraced). At the private school my daughter attends, in the most liberal city in America, mothers make up the vast bulk of volunteers during the day while their husbands work. Go figure.

Daniel Hayes

How It Works (Part 1)

Does God exist?

This question doesn’t really interest Charles Taylor. Here’s a question that does interest him: How did it come to be that belief in God, which a few centuries ago seemed axiomatic, now seems almost the opposite? Or: How did it come to be that nowadays, “even for the staunchest believer,” belief in God has become “one possibility among others”? Or: How did it come to be that we might agree that the question I just posed, in the preceding sentence, is an interesting question, whereas once it wouldn’t have made sense to talk of belief as a matter of “choice”?

One way of tackling this question takes on the form of what Taylor calls a “subtraction story.” I’d go even further and suggest that this is the only story in town, the one that almost everyone uses without even thinking about it. (In other words, this story is assumed in the same way that we assume that people have a “choice” to believe in God or not.) The secular, according to these subtraction stories, is simply what’s left over once you take away the religious, the superstitious, the supernatural, and associated, outmoded customs: you see things for what they are. You see existence nakedly. Or, being more modest, you’ve at least subtracted a bunch of obvious impediments to clear vision. The facts increasingly are put on the table; and, if you see what you see in front of you and still think that God exists, then that’s your choice, your right. If you believe in God, it’s just that you’ve done the math, you’ve executed a set of subtractions (a lessening of ignorance, we might say), and you’ve come to a different answer.

All of this (what I’m calling the only game in town) seems benign enough, even if the fallout for religious belief might be significant. After all, no one is subtracting everything, all at once; it’s an ongoing process, largely driven by scientific knowledge and its logical consequences (including its ethical consequences), and it’s a slow grind, even if at some point we can speak of the appearance of “a secular age.” Things change, the train moves forward. And yet Taylor’s argument is that this way of thinking (about religion, about modernity) is anything but benign, and hardly scientific or even logical. This story, from Taylor’s point of view, doesn’t add up. It’s a story of progressive secularization that’s driven by a master narrative that hides in the wings, with most of us taking it on faith, nodding our heads as we watch things get better and better, if haltingly. (Hence, the irony: that faith.)

To be clear, Taylor isn’t opposed to master narratives of modernity. His book presents one. He just wants to get it right. He wants to initiate a discussion about how we’ve gotten to where we’ve gotten, and what that implies for the future. Taylor wants as much as possible to shine light on powerful, competing master narratives in order that we might realize our assumptions, take them into account, and see how things really work (even if modest about our ability to do that). In a separate essay, Taylor describes a bad narrative of modernity in this way: “On this construal, the essential character [of human beings] was always there, but previously it was impeded by factors that have since been removed.…An example of this…is the picture of human agents as essentially individuals operating by instrumental reason. In the past, this tendency was held in by illusory religious or metaphysical views or by tight community mores.” This is the “subtraction theory” of modernity, though Taylor also links it to a slightly different, more forceful conception of historical change, which he calls the “breakthrough theory.” Natural forms of life and ways of thinking may have been “held in check in earlier social forms,” but now a given “universal tendency in human action,” no longer inhibited by “ignorance, or by blind custom or by authoritarian commands,” can “take its proper place in human life.”

Historical breakthroughs are not a dime a dozen, but they happen. In other words, we make progress. (The “we” here is a problem, but for the moment let’s assume we’re talking about the citizens of Western nations; obviously, a potentially disturbing tendency of this particular master narrative is for us to experience an imperative to spread the good news, to help others make the same breakthroughs, in both their political structures and their ways of thinking.) We try to do better than our ancestors. We try to get things right. We try to see things clearly, and we act (in the ethical arena, for example) accordingly, even if our actions wouldn’t have made sense to those who came before us. We slowly overcome areas of ignorance, subtracting the beliefs or customs that functioned according to past ignorance, and increasingly the world becomes a better place, if only slowly. We are realistic about change and how it comes about, and yet that change moves us forward, toward a better realization of what it means to be human beings.

Taylor resists this story, both in its content and its form: he has another story to tell of how we’ve gotten to where we are; but he also wishes not to make the mistake of naturalizing the product of recent history (i.e., the theoretical move whereby impediments are removed and we find ourselves more free to be ourselves). Taylor presents a different master narrative that sees the present as a contingent achievement, leaving us with both less ability to deny its humbling link to the past and more ability to claim outright creativity. There is nothing natural about the present, nor anything remotely inexorable about its path. And as I’ve already implied here, and written before, I find Taylor’s competing narrative more convincing, more realistic, and more true. Taylor’s narrative makes a whole lot more sense than the Pollyannish, humanistic story of our progress (outlined above), which is largely just assumed, especially by Western politicians and cultural commentators who use it for their own purposes. That story makes us feel good, and offers us a reassuring distance from our limited if well-meaning predecessors, but it also seems wrong to me. An instance of how this bad narrative is applied to a current dilemma (the issue of homosexuality and same-sex marriage and religious opposition) will be the subject of the second part of “How It Works.” I’m trying to get to the nitty-gritty, where things can get a bit more uncomfortable (at least for me).

John McGowan

Something More

Just a quick first response to Daniel’s last post, a kind of promissory note for more extended meditations.

Taylor is a proponent of the “something more” school.  This, what’s right in front of us, can’t be all there is.  (The Peggy Lee question.)  The counter position, championed by Stanley Cavell most vociferously among contemporary philosophers, is to embrace “the ordinary” and to chastise philosophers in particular and humans in general for their failure to value the ordinary and to, again and again, sacrifice the ordinary in the name of some vague “something more,” some unreachable transcendent.

I don’t know how to adjudicate this debate.  I am (no surprise) inclined to think the quest for the extra-ordinary, for the transcendent, is a tragic flaw, perhaps (if we want to be grandiose) the mother of all tragic flaws, the flaw that brings humanity to its knees (or, more accurately, to inflicting and undergoing suffering repeatedly, ad nauseam).  If only we could be satisfied with what the world gives us and not keep whoring after strange gods.  But–to keep with Daniel’s wondering about “reasonableness”–I don’t know how you would even begin to persuade someone that hankering after “something more” is a sucker’s game.

Certainly, we know (if we know anything) that telling someone that this “something more” does not exist (there is no God, there is no afterlife) cuts no ice at all.  What other forms of argument, of persuasion, are open to us?  Saying that believing in the “something more” has terrible consequences doesn’t work either.  In fact, Taylor makes the symmetrical argument.  He says the inability to believe in transcendence, in something beyond, has terrible consequences.  An impasse.

Daniel Hayes

Making Headway

I wrote before of a few things I liked about Charles Taylor’s A Secular Age—or at least how it had me thinking in new ways. I want to write more about the book and its influence on me. First, though, I want to list some complaints. I need to reread some passages, to make sure that I haven’t misunderstood or missed something; but, with that qualification, here are some quick concerns I have with Taylor’s approach.

One of the things I like about Taylor is his emphasis on history and how we can be blind to our own assumptions, or blind to how an earlier era’s assumptions must have appeared irresistible (whereas now we find ourselves baffled by them). I think Taylor refers to this, in a more philosophical light, as an epistemological question—what we know, and how we know what we know—and he thinks of himself as working “hermeneutically,” being quite modest about any search for truth or end point to thinking. At any time in history, ours is just a point of view, a way of making sense of something. And what’s contested, often without our knowing it, is both the way and the something; in other words, contemporary questions often pose dilemmas that were once not seen, even remotely, as dilemmas.

But I think Taylor plays a little fast and loose with this hermeneutic method. When he’s describing his way of working, he’s not embarrassed to say that his approach is linked to other thinkers, including Heidegger and even his postmodern followers. (Taylor has a lazy way of speaking of postmodernists as a group—and, almost always, a misguided, faddish group.) But later in the book, when he is obviously turning to his own religious faith, Taylor is hostile to postmodernism and, I think, to a good chunk of the thinking that has informed his unique narrative of the ascendance of secularism. In large part, that narrative is unique because it resists the idea that truth—in the form of science, and especially in the form of a reason championed by Enlightenment figures and their contemporary cronies—is something that is evolving, developing, helping us to see things more clearly. (The progressive, subtractive story of secularism, whereby we give up our ignorance and see facts as facts.) But this perspectivism—a humility in the face of history—is mostly missing from the latter part of the book, when Taylor is more often pointing to what might be construed as universals.

One example of this tendency is the emphasis Taylor puts on sex and violence as particularly thorny problems of the human spirit, no matter what historical era. At first, it seems that these are simply a window through which to watch the transformation of thinking over the centuries, the way Christianity got transformed into a benevolent humanism. But pretty soon it becomes clear that Taylor is suggesting that sex and violence are “problems,” and problems that, as the saying goes, have been with us forever. This leads Taylor to inquire into the best ways of countering the worst of violence, and of course it comes as no surprise that Christianity—at least the kind that doesn’t simply condemn violence wholesale or traffic in it—offers a compelling way of dealing with violence. Not that Taylor is intolerant of nonreligious ways of combating our own violent tendencies: “Both sides have the virus and must side against it,” he writes. That this virus is a part of the human condition, that violence is akin to a sickness within us—oddly, this seems obvious to Taylor. So much for historical peculiarities.

Another example—of how Taylor takes what might be temporary, a historical peculiarity, and then subtly turns it into a human universal—is the issue of eternity. It would be one thing if Taylor only wrote disparagingly of a certain pleased-with-yourself, scientistic attitude, as though we moderns had moved beyond ignorance and the childlike desire to achieve eternal life; but he goes on to suggest that there’s something universal about the idea of eternity—and, giving examples from everyday life, suggests that part of what makes us human beings is the way we feel dissatisfied with our immediate situation, and especially our temporality. Again, if this is true—and Taylor has discovered something universal, something about which anyone would agree, at any given time (as a topic, a concern, an enduring dilemma)—Christianity, or Taylor’s brand of Catholicism, seems pretty interesting as a way of thinking creatively about issues of life, death, and the beyond. But one could easily imagine a second Taylor taking the first Taylor to task for failing to see the particular, historical nuances of ideas of temporality and human desires to overcome it. Is eternity so obviously eternal?

One argument of Taylor’s that seems particularly powerful to me is his attack on the centrality of meaning in contemporary discourse—its emptiness as an idea worth fighting for. There’s a point when he suggests that no one dies for meaning, no one sacrifices everything for meaning, even though people supposedly think of meaning as the highest good and feel reverence for its variety, the diversity of ways of achieving it. (Hence, the liberal idea, the tolerant idea that we should simply encourage meaning, no matter whether that meaning takes on a religious attitude. The “good,” as it were, is simply to face the crisis of meaning and give an answer—any answer, outside of violence.) Part of the appeal of meaning, besides its susceptibility to notions of tolerance, is that it seems benign, undeniable, universal. Everyone needs meaning, right? Who isn’t trying, in one way or the other, to figure out the meaning of life? But Taylor’s historical analysis is convincing in suggesting that meaning is a modern concept—that is, a way of asking a question that wouldn’t have necessarily made much sense to people of prior eras. And so if you’re going to act all tolerant and understanding about different forms of meaning—well, it’s best you go to the trouble of understanding that your warm understanding is a conceit in the first place.

This I like. What I don’t like is how Taylor seems to resurrect meaning, this idea of a universal philosophical question that so much appeals to the contemporary psyche, once A Secular Age enters more personal territory (i.e., Taylor’s form of Catholicism, and his discussion of the All-Star team he’s put together of interesting and not-easily-dismissed Catholic intellectuals and poets). He doesn’t call it meaning, but Taylor does introduce in its stead the notion of something more, something greater than the bare-bones immanence that so many of us like to profess. In some ways, this has been a topic or theme throughout the book—how a purely materialist view hides its own assumptions, its own belief in, let us say, something less. (Something zero?) Taylor doesn’t quite use the word transcendent to describe this inevitable and ineluctable yearning (because transcendent fudges the difference between the immanent and the material, or too narrowly defines it as religious); and yet he speaks of those serene moments when you’re lying in a meadow and looking up at the sky, through the trees, and…and you get a little teary-eyed about everything so large and big and beyond everyday life. Everyday life can seem so…so mundane; and who amongst us really feels that that’s all there is? Don’t we want more, no matter what those stuffy postmodernists might say? Isn’t it fair to say that we all want more?

Increasingly, as the book goes on, Taylor presents himself as simply being the most reasonable guy in the room. This is not Taylor at his best, at his most provocative. Suddenly there’s a hope in Taylor that seeing everything, from all points of view, will allow him to mediate amongst the views and come to actually see things clearly. (Oddly, it’s an almost Enlightenment faith in reasonableness, if not quite reason.) Taylor is forever using optical metaphors—seeing things, gaining perspective, suffering blindness, being shuttered. “Understanding another approach can free us from the blindness that attends a total embedding in our own,” he writes. As a result, Taylor ends up almost always sitting smack in the middle of whatever controversy he’s discussing, distancing himself from those he characterizes as polemicists—easy targets, usually religious zealots or hardcore secular atheists. Let’s not throw the baby out with the bathwater, he seems to be saying—as though anyone were busy endorsing infanticide.

“One can respond to this difference [of opinion] polemically and judge that one or other was bang-on right and the other quite wrong,” Taylor writes at one point. “But we can also see it in another light. Neither of us grasps the whole picture.” Good enough, and there’s a modesty here that’s admirable. But it seems as though that “whole picture” is exactly Taylor’s goal, and he even goes on to suggest that maybe we are making progress in that regard. Perhaps Jonathan Edwards failed to “extend the courtesy” of listening to others and weighing opposing points of view, but Taylor obviously sees himself as capable of getting closer to the “whole picture.” And this gives him hope that we may, after all, be getting somewhere. “We have perhaps made some modest headway toward truth in the last couple of centuries,” Taylor writes, not noticing that the earlier part of the book was so interesting exactly in suggesting that headway wasn’t a good way of thinking of secularism or much of anything else.

John McGowan

Limitations

So I, too, have been reading Taylor and find him, as always, incredibly lucid, prolix, thought-provoking, and infuriating.  But, in the end, good to think with.

Right now, I’ll just get down three quick thoughts in response to Daniel’s last post.

1.  I, too, declare myself hopelessly guilty of limited imagination when it comes to the past.  I have been teaching King Lear the past two weeks, a play I dearly love and one I have a particular “take” on.  My students are sharp and one of them, last class, basically said: “Your reading of the play is what someone in 2015 might say.  But it can’t possibly be what Shakespeare meant by it.  Shakespeare lived in an entirely different universe than us.”  I think my student is right and I both respect and read scholars who work hard to illuminate how Shakespeare thought as opposed to how I think.  But I also have to confess that I don’t have enough interest to just read things for their historical content.  Unless it speaks to me and my concerns, unless it is a resource for my thinking about the things that matter to me, I give it a pass.  Primal narcissism of some sort.  But also a sense of urgency.  I’ve got this life–and it’s damn hard to figure out how to live it.  Certain books seem to offer me some clues.  And it’s those books that serve my quest that I want to spend time with.

2.  The Taylor I have been reading (essays in his latest collection) keeps returning to the notion that “life” should not be the supreme value.  He doesn’t deny that “life” should be a key value, but it should be accompanied by reverence for “something beyond life.”  It’s not utterly clear what that something is, but the general idea is that there is a transcendent that exists outside, beyond, or alongside (?) life and thus tempers our allegiance to it.  I should add that what “life” means seems more clear to me.  It means that life is all we’ve got; that we have this time on earth and that we should devote ourselves to making the most of it.  This entails a) the ethical/intellectual project of figuring out what way of living does make the most of a life; and b) extending this “right” to make the most of one’s life to every human and, perhaps, to every living creature; and c) claiming that anything that diminishes life, that deprives a living being of the means and opportunity to live a flourishing life, is ethically wrong.  What Taylor is saying is that (c) shouldn’t rule the ethical roost.  That there is another value–something “other” than life–that tempers using life as the only standard of measurement.  But I am not clear about what that “other than life” thing is and not clear how it plays out in actual ethical judgments–either judgments about whether some thing is good/bad or judgments about the best way to live a life.

3.  Rationalism and autonomy.  I think Daniel has pushed me to where I can at least state what my commitment is.  This has two parts.  First, I am committed to controlling as much about my life–about how I live it and who I live it with and within which contexts–as I possibly can.  I think (and maybe this is a pathetic echo of Sartre’s blustering about the difficulties of freedom) that some people take a different route; these people (call them believers) find peace and satisfaction in ceding that will to control to some authority.  “In his will, our peace” (Dante).  I do not want to stand in that relation to authority.

Second, what are the limits to what I can actually control in my life?  I am perfectly ready to concede that I can control much less than I wish I could control.  And I am even (although more reluctantly) willing to concede that I may be (forever?) unable to know (or name) the various things that limit my control.  I can even accept that I am deluded about how much I control.  But none of that undermines for me the basic project of trying to control, of trying to make my own choices and working to have those choices lead to the outcomes that I aim for.  I don’t know any other way of living.  Which, doubtless, is part and parcel of living in our individualistic modern age.  But I can’t find it in myself to willingly cede to all the powers that limit my ability to control, that limit my autonomy.  My fight is against those powers, even if it is a fight I can’t win.

Daniel Hayes

Historical Peculiarities

[Apologies for my absence: I have been dealing with real estate—the selling, the buying—and what’s come to be referred to as “relocation.” Along with my family, I have been deep in the process of such relocation, with its attendant anxieties.]

I am almost finished reading Charles Taylor’s huge A Secular Age. I’ve also been reading some essays about Taylor’s book, and so far I’ve been selfishly disappointed. What seems so strange to my eye, in reading Taylor, doesn’t seem to register with other folks. (Though these same folks do have many interesting things to say, and many compelling criticisms.)

One of my earliest, most primitive responses to Taylor’s story of secularism was a kind of humility bordering on embarrassment. Embarrassed at what? Embarrassed at not fully realizing how the everyday ways that I think about life, philosophy, death, religious categories, etc., would make very little sense, even conceptually (especially conceptually?), to people living in a different era. I feel embarrassed at not having sensed the extent to which I wear blinders or fail so miserably in getting my head around the degree to which any of my ideas or goals or values are dictated by my setting, particularly in a historical sense. My thinly veiled contempt for prior ideas of the good life is, I think, worthy of contempt. For instance, the very way I conceive the question of religion/secularism (seemingly posing the obvious tension between transcendent and immanent categories) is itself historically peculiar. This doesn’t necessarily alter the immediate answer I provide to this question—hey, I’m a good atheist!—but it certainly has a way of putting me in my place and making me feel deeply the parameters of my own thinking.

Taylor’s claim—which I haven’t yet been able to refute—is that we are all secularists. Everyone, including the religious folks, lives in what he calls “the immanent frame.” No one really believes in the supernatural in the way that it was once believed in, and religious behavior, of whatever flavor (but particularly contemporary Christianity), is utterly colored by the move toward secular categories and the devaluation of old ideas of the supernatural. We live in a materialist world—all of us. In fact, the advance of secularism is genealogically tied for Taylor to certain fundamental and seemingly irrevocable changes in Christian theology. Far from being its opposite, secularism—or at least what’s commonly called secular humanism—is just a recent, very interesting development of Christian ideas and values. Which is the answer to the question of why Philip Kitcher, the good humanist, takes a walk with his liberal Christian counterpart and feels so much of a melding of minds.

There’s much more to Taylor’s argument, of course. And much more to quibble with or disagree with. But I want to move to John’s claim, over a month ago, that he resisted the idea of kowtowing to “authority,” and balked at the idea that there might be a religious underpinning to his own ideas (despite his best efforts). I had worried that this conceit was tantamount to claiming that one could resist the unconscious, speak of making “rational choices,” and take on a kind of freedom only betrayed by religious thinking. John was suggesting that we can choose to live under a given brand of “authority,” or not. That seemed too easy to me. And it still does. It fits too neatly into what Taylor refers to as the “subtraction theory of secularism,” whereby we simply rid ourselves of bad thinking and deal with the aftermath (which isn’t always pretty, but at least we know who’s in charge).

John’s response to my skepticism was to speak of “autonomy,” and its relation to ideas of authority, and to argue that one can, and probably should, aspire to this kind of Kantian autonomy, with the proviso that of course it’s not possible for anyone to achieve “complete autonomy.” Unquestioned here was what John called “the choosing self”—a kind of operator who reacts to stimuli, influences, and parameters and does the best he or she can to fend off what seems adverse to human flourishing.

At first glance, this seems a modest proposal. And, after all, what’s the alternative? That “I must be in thrall to forces of which I am unaware”? To what extent unaware, John asks, speaking of the uselessness of determining exact percentages of autonomy or the lack thereof. But more to the point, John is suggesting that it doesn’t much matter anyway. What difference would it make that my “choices” are not, either fully or even partially, determined by me? “Don’t we worry about the unconscious only when it is sabotaging us?” John asks. “And how could we have a criterion for what is sabotaging us unless we had an independent way of judging what is good for us?”

Here I think John is revealing the cards he’s playing with—the reliance on “an independent way.” Independent of what? And where would this way come from? Are secularists more independent in this way than religious folk? Is John really arguing for what is essentially an Enlightenment view of independent reason? And yes, John is right in saying “this is classic rationalist Freud: where id was, ego shall be.” But I’m surprised that this old-fashioned idea of psychoanalysis is being rolled out as a way of dealing with the quandary of secularism, or even the problems of life. Most contemporary psychoanalytic theorists would cringe a little at this picture of the “rationalist Freud.”

I suggested above the idea that my way of thinking, or John’s way of thinking, is “historically peculiar.” No doubt John would agree with this idea, though he might then ask, as always, “So what?” And I concede that I’m not sure about the implications of my thought, and the particular force it had for me in reading Taylor and thinking about secularism. But what I’m resisting here is the category of “an independent way of judging what is good for us,” which seems not only not independent but very much missing the experience or the register of what it is that’s so peculiar. Liberal ideas of choice, and rational notions of independent agents assessing the possible ways that that choice might be sabotaged, so as to further both the rational and the independent—all of this seems foolhardy. I know, I know—that’s a harsh judgment, and especially for someone who has no independent way of judging.

John McGowan

On Tolerance

We are back to the book, Political Theologies: Public Religions in a Post-Secular World, edited by Hent De Vries and Lawrence Sullivan (Fordham UP, 2006).  Now we’ve read the essay “Toleration without Tolerance: Enlightenment and the Image of Reason” by Lars Tonder.  (If there’s a way to put the proper slash through the “o” in Tonder’s name, I don’t know how to do it.)

An interesting essay.  But I have a big stumbling block, so I am just going to discuss that problem in this post.  Several times Tonder says that tolerance needs to be understood as “the disposition to endure pain and suffering” (328).  He never really explains this usage of the term.  The closest he gets is the following statement: “If it is the case that tolerance concerns the endurance of pain and suffering, and if it is the case that this endurance is what determines the difference between the tolerable and the intolerable, then it is also the case that these dispositions are of great importance to the overall discussion of tolerance and toleration” (340).  He has, earlier, made it clear that “toleration” is “the institutional framework accommodating minority groups, through principles such as free exercise of religion and separation of church and state” (328).  Against this institutional framework, he poses the “disposition” that is tolerance, which he later characterizes as a “political sensibility” (335).

But I don’t understand how tolerance is a disposition to endure pain and suffering.  I do see that here is one way to use the word “tolerate”:  How much pain can I tolerate?  And I would be open to Tonder’s explaining how that common usage could be tied to the notion of tolerance.  But I don’t see where he offers that explanation, and in its absence I am baffled.  Tolerance, especially in tandem with what he calls “toleration,” is about a political sensibility that refrains from punishing others for choices they make that go against my fundamental convictions.  Tolerance involves a cultivated indifference to the choices others make–swallowing my disagreement or disapproval in favor of an awareness that granting the other this freedom is conducive to peace, even if it falls far short of harmony.  So when Tonder tells us we want “tolerance because [it] includes the attitude or sensibility that makes it possible to endure pain and suffering” (333), my reaction is I don’t understand the goods that tolerance delivers in that way at all.  Instead, when he talks on the next page of “forbearance” and the way that “forbearance . . . drowns discord,”  I feel on much more familiar ground.

No need to beat this point to death.  I just want an explanation of the connection between tolerance and endurance of pain.  Or an account of how thinking about tolerance in this way is useful for an examination of how and where the term tolerance is deployed in political thought and in political practice.

John McGowan

Scripts for Change

Another theme that’s been left hanging too long.  Daniel quite rightly asks me to provide a plausible account of how a change in sensibility might occur.  And here I am going to be much more sociologist friendly than I was in my last post.

I think individual moments of conversion (Saul on the road to Damascus) can occur.  Someone can be convinced by an experience, or something she reads, or some other kind of input to reorient some basic tenet of her life.  But I think that is astoundingly rare.  We are, as William James was fond of saying, overwhelmingly conservative in our basic sensibilities and convictions, especially once we leave the labile years of childhood.

Instead, I think it much more likely that changes in sensibility are achieved only through the work of sustained (over considerable lengths of time) and organized social movements.  You need an abolitionist movement, a suffragette movement, a civil rights movement, an anti-abortion movement to create change.  Much of my impatience with what passes for politics among the academic left is its lack of interest in and distaste for the building and sustaining of such movements.  That’s hard political work, but, so far as I can tell, essential to effective political work.

Political work, because I think it helps the movement if it has concrete legal goals (like gay marriage, or voting rights acts, or getting women the vote, or overturning a Supreme Court decision about abortion).  Such a goal gives the movement a tight focus.  At the same time, the movement also has a) to offer a community to its members; b) to present in the public sphere exemplary instances of and narratives about what a life following this different sensibility looks like; and c) to actively recruit new members.  What it has to offer those new members is membership in that community and participation in that changed way of life.

In sum, a change in sensibility brings with it a new way of understanding who one is–and a new way of living one’s life.  And those two things are very hard to sustain outside of a community of like-minded others who praise the individual for having the right sentiments.

John McGowan

I’ll Take Oranges

The back-and-forth between Daniel and me increasingly seems to me to hinge on the aspirational–a term we have used before.  In this case, the aspiration is for autonomy, in the classic Kantian sense of choosing to follow only directives of my own devising.  Placing the ultimate authority in the self.  Daniel’s response is that autonomy is awfully difficult, maybe even impossible, to achieve.  As a reasonable guy, I respond: “Yes, the choosing self is influenced by all kinds of factors.  Total, complete autonomy is, no doubt, never achieved.”

The second issue, almost a paradox, is that admitting that full autonomy is never achieved appears to entail something like the “unconscious” to which Daniel refers.  After all, if I chose which authorities to follow, I would be exercising my autonomy.  So it follows that I must be in thrall to forces of which I am unaware.  I make choices in ignorance of what factors external to my self are, in fact, influencing (perhaps even dictating) those choices.

At this point in the discussion, I want to ask 1) what percentages and 2) what difference it makes.  The percentage question is what percentage of my choices are not autonomous.  Is the sociologist denying that autonomous choices are ever made?  Are 50% of one’s choices autonomous? 15%?

The second question comes up when I choose oranges over apples.  Is my choice there really connected to unconscious determinants?  And, if so, what difference does that make?  Isn’t being non-autonomous only an issue when it seems like I am making choices that are counter-productive or self-destructive?  Don’t we worry about the unconscious only when it is sabotaging us?  And how could we have a criterion for what is sabotaging us unless we had an independent way of judging what is good for us? That is, we must have some point of view we trust as representing our “true good.”  If we simply believe we are stumbling around in the dark all the time, following authorities we can’t even correctly identify, no less assess, I don’t see there being any place to go.  It’s just ignorance all the time.

In short, I think the aspiration to be autonomous is pretty widespread.  And it is only in the context of that aspiration that worries about the limits of autonomy make sense.  It’s the damage that can be done by being in thrall to outside forces that worries us.  That’s why I’ll take my stand in trying, to the limits of possibility, to achieve control over my choices.  This is classic rationalist Freud: where id was, ego shall be.

Daniel Hayes

Apples and Oranges

John’s most recent post begins with a refusal that assumes a choice: “I resist,” he says, and “will continue to resist” (if given again the choice, presumably), the essentially religious.

Of course John’s “stand” assumes the nonexistence of what he’s refusing—how could it be otherwise? He’s a free man, not some underling of God.

John is saying something on the order of, “I refuse to believe in magic,” or “I refuse to believe in Santa Claus.” Or, more precisely, “I refuse to believe that I can’t have a good Christmas without believing in Santa Claus or some alternative fantasy figure that I don’t believe exists.” And I think he’s right to sever any necessary connection between belief, broadly understood, and transcendent categories. (Currently, I’m a big believer in my daughter, but I’m certain she’s less than technically divine.)

That John believes something is obvious. In this instance, in his act of resistance, John is believing in choice—that is, in his ability to pick and choose (whether he’s any good at it or not). But it’s the picking and the choosing that troubles me. And it’s what gave me pause when I listened to the Bellah interview.

Religion for John bespeaks authority. And he takes his stand—saying that only he will decide what to think and what to do, no matter his own limitations. That one can make this claim is certain. But my worry—and again, it’s a worry, not a belief—is that resisting the authority of religion might be akin to resisting the authority of the unconscious. Hey, go ahead, stand your ground! Flex those muscles!

Am I taking the concept of authority too broadly? After saying that he reserves, and cherishes, the right to change his mind, John sums up: “Unless we all are operating under a similar concept of and relation to authority, I don’t think we are all religious in a way similar enough to matter.” But here exactly is the question, and it brings out the skeptic in me—or at least the sociologist. Do we really choose our “authority”? And even if we do, what are the limitations in our choosing? Is it like choosing between apples and oranges, or more like choosing a Fuji over a Red Delicious?

John McGowan

Quick Thought on Flypaper

I resist–and will continue to resist–the notion that having a belief in something, even a belief that cannot be fully justified in rational terms, is essentially religious.  The same for the corollary notion that since all of us have such beliefs, that all of us are religious.

Why this resistance?  Authority.  To be religious is to grant authority to that thing beyond oneself in which one believes and which one also takes as the source of other beliefs.  Authority over me–and, all too often, authority over others.  I do what I do because God commands it or requires it, the religious person says.  And that God also commands or requires you as well as me to do and believe these things.

Non Serviam.  The endless cry of the secular.  That’s where I take my stand. I will accept no authority beyond my own convictions, illogical or logical as they may be deemed.  The secular person says “nothing is sacred.”  If I take something on faith, it’s because I affirm it–and I am extremely leery of imposing that belief on another, and ever mindful (I hope) of my fallibilism.  I reserve–and cherish–the right to change my mind.

So, unless we all are operating under a similar concept of and relation to authority, I don’t think we are all religious in a way similar enough to matter.

That still leaves the notion of the “holy.”  I have no interest in (or patience for) a law-giving, punishing, commanding God.  But I am more open to trying to think about the category of the “holy.”  What is invested with meaning, replete with it, for me?  And how do I recognize such things, and establish an appropriate relationship to them?  That question does interest me.  Which is why I like the Flanagan book.  He is trying to think about these questions of meaning.

Daniel Hayes

Like Flypaper

Recently I listened to an online interview with Robert Bellah in 2009. Roughly thirty years before that date, I used to have my weekly meeting with Bellah, who was then my senior-year thesis advisor. And so watching this interview was a little strange; Bellah was to die just a few years later, and yet at this point he showed no signs of age, at least intellectually. He seemed no different than when I knew him—same mixture of kindness, intelligence, and occasional arrogance.

Anyway, watching this interview made me realize how confused I am about secularism, which is pretty much the overall topic of this blog. The interview with Bellah was largely about just this issue. I found myself agreeing here, disagreeing there, but mostly feeling as though there was no way to get a handle on the topic of secularism. I feel sometimes as though I’ve traveled a great distance and found myself right back where I started.

Bellah was saying very nice things about Charles Taylor and his “take” on secularism, and so you can imagine the basic idea. (I haven’t read Taylor, but I’m about to dive into the big book. I’ve hesitated up to now because he’s not a good writer. But I also need to educate myself, I realize.) Secularism is real, a historical phenomenon, but—as Bellah put it—it’s not as though this “secularism” is really opposed to the religious; in fact, it’s a product of everything that came before it (namely, religious ideas). In that sense, you might say that secularism is another religion; or you might say that it’s simply a development of a religious way of thinking.

Bellah wants to go a little further than that, of course. And so “a religious way of thinking” isn’t one of the dishes to be chosen from an intellectual smorgasbord; it is more or less inherent in what it means to be a human being. We traffic in the transcendent, in other words. (He repeatedly mentions, in the interview, the example of “human rights” as a category in this spirit—one obviously to his liking.) The clincher for Bellah, the sociologist, is Durkheim’s theory of religion—its utter necessity, its inevitable existence. To imagine human beings without religion, without some transcendent category, is to imagine human beings without social organization.

All of this gives me a headache.

Bellah always had a wonderful ability to smile, and a good repertoire of different smiles. One was a sort of understanding but somewhat condescending smile, and this is the one that he shows when confronted with the idea of somehow overcoming religion (even theoretically). Oh, if only that was possible, he seems to be saying. And I have to admit, I find myself persuaded by this idea of the inevitability of the transcendent, even as I wish to resist it. Maybe secularism is a mirage; maybe God never dies. Maybe he doesn’t even go into hiding as much as he takes on various disguises. Or maybe, in true humanist fashion, we worship ourselves—always have, always will—by imagining ourselves writ large in some faraway sky (sometimes called theory).

In other words, maybe this entire game of secularism is a variant on Whack-a-Mole: we wait for another bad theological remnant to show up, and then we get rid of it—only then to see another remnant pop up. We act as though we might actually cleanse the deck of the theological, but the joke is on us. We think foundationally, can’t help it. Saying you’re an atheist is like saying nothing—waxing eloquent on the tip of the iceberg.

You can see the appeal of Dawkins and Hitchens and Dennett. God is dead, heaven doesn’t exist—not even evil. That’s simple enough, and I’ll sign my name to that. But once you get past that—the question of the existence of something—you’re left, it seems to me, with the very messy question of ethics or guiding principles or whatever the fuck you want to call it. In other words, you still have to sign up for something. (Every day, it seems, stumbling out of bed, eyes barely open to this so-called world.) And in the signing, in scribbling your name next to what you believe, or what you value, you seem to be making a bunch of assumptions of the foundational kind. Maybe Rorty can shrug his shoulders, so as to say So what? (This is the appeal of poof! A problem suddenly isn’t a problem. James, Rorty, and Wittgenstein all have this appeal.) But I find myself uncomfortable, confused, somewhat defeated. Am I destined to be a believer? Is religion like flypaper?

Daniel Hayes

“Changes in Sensibility”

First off, I appreciate John’s response—his attempt to continue the dialog. And I agree that there is no “killer argument” against the practice of putting human beings and animals in two, almost entirely separate containers. I think Darwin’s discoveries, and his theory of evolution, are about as good as it gets. And there’s a strange, intriguing story here about how that theory’s applicability is largely ignored; and when Darwin’s theory is even taken seriously, it’s misunderstood in frightening ways. As always, I’m interested in why that is, how it came to pass that evolution makes little difference in how we conceive of ourselves (irrespective of issues of how we treat other animals). But of course science is a discourse, and so the question isn’t about why people don’t accept facts but why the so-called facts have so little currency.

As for “changes in sensibility,” I find myself both baffled by this as an explanation of change and suspicious of its import. John’s story of why slavery is no longer a tenable idea—people woke up to the thought that they wouldn’t want their own children made into slaves—involves a kind of realization (of the extent of the circle of concern?) that seems suggested after the fact. What took people so long? Given John’s version of the end of slavery, it’s hard to imagine anyone in America ever supporting it. I guess we could say that what changed was a certain “sensibility,” but I’m not sure that this is anything other than a reiteration of saying that change took place. If political change is just a series of “changes in sensibility,” why are we so charged up about it?

And why do people’s sensibilities change? I know that that is a very hard question, but I think it deserves an answer, if only sociologically. Especially for those who find themselves in disagreement with political conditions, it’s not enough to suggest that we wait around until, magically, sensibilities change. I know this isn’t John’s position, because I know he fervently believes in “political action.” But the idea that political change is best described as a “change in sensibility” offers a political actor little to go on.  John admits that “reasons for revisions are good to have,” but he goes on to say that they aren’t “decisive.” Okay, but then what is decisive? In other words, what other choice do we have than to offer up “reasons for revisions”—that is, what choice do we have than to engage in a conversation where we seek agreement? And how would we seek such agreement if we didn’t offer reasons?

This isn’t about being a rationalist, or making the mistake of thinking that iron-clad philosophical arguments, constructed in ivory towers, would save the day if only we could get people to listen. But I’m not sure I agree with John that “the kinds of conviction that do lead to change are better described as shifts in sensibility than as intellectual conversions.” For one, this is assuming that intellectual conversions (of which there must have been plenty in 19th century America) are a matter of people working logically and unemotionally with pen and pencil. But I also wonder what John thinks is a point of leverage, a way of creating change—if not the emotionally engaging work of words and ideas?

And this is where I think John’s nod toward Wittgenstein wasn’t only about the technical business of categories; it went further in saying that there was no explanation for practices. (“The Wittgenstein move is to say that the practice itself is all the ground we are going to get. There is nothing else that is going to explain or ground it.”) In other words, what can we do with practices but chalk them up to “sensibilities” and then go about the business of describing them? John speaks of having a moral sensibility, and we can presume that he might be able to describe that sensibility, but how—other than by description—might he influence others, who don’t have his sensibility, to take on his sensibility? Isn’t this what it means to be political? Am I missing something here?

Maybe I’m being difficult here. After all, John isn’t saying that we shouldn’t talk about things, have discussions, attempt rational persuasion, etc. He’s simply being a realist—political change takes place for reasons (if that’s the right word) that are difficult to understand and have more to do with emotions and feelings and sensibilities than with Hollywood versions of intellectual conversion. Go ahead, have the conversations, try to persuade others, but don’t be under any illusion that morals are a rational matter. They aren’t.

All of this seems true and reasonable. But there’s something about the measured tone that bothers me. (Here I’m thinking again of Rorty doing his shrug.) Where’s the fire? Where’s that Emersonian attempt to face yourself, to explain your point of view to others as though you were speaking for everyone? And why the sheepish business about not having “a moral leg to stand on”? Sensibility may be as good an explanation of change as any other, but does an ethical actor simply refer to his sensibility? This is of course an old question, about what is a given and what is decided, and—minus religious certitude or the Enlightenment equivalent—it’s an ongoing quandary.

John McGowan

Human Rights Abuses

Since I made a short, cryptic allusion to this issue in my last post, I am going to take an opportunity to clarify the human rights point that Daniel made some time back.  Basically, if I understand him correctly, his point was that a polity (like the United States) that encodes a set of human rights within its legal system can then 1) complacently consider itself a realm where “human rights” are covered (haven’t we guaranteed them constitutionally?), so that violations of those human rights are ignored because we just don’t do that kind of thing; and 2) become the “enforcer” that, from its position of righteousness, punishes people in other countries who are labeled as violating human rights.  Thus, both internally and externally, the very codes that establish human rights give the US (the relevant example in this case) almost complete carte blanche for abuses of those rights.  Self-righteousness is (perhaps) more dangerous than outright villainy.

This paradox about legal codes–about the way that such codes both create criminals and give established institutions legitimate grounds to act violently toward subjects (Weber on the state as having a monopoly on the legitimate use of violence)–is, I think, undeniable.

So I don’t have a solution to offer.  Which means I am back to two things I keep trying to insist upon. 1. That no moral code can guarantee against its misuse or its violation, extending to those who will pervert the code to justify their violations of it.  There will always be people who game a system precisely to violate both its letter and its spirit.  That’s why “eternal vigilance” is so necessary.  I am tempted to be hyperbolic and say that “eternal vigilance” is the most important political virtue.  One has to be watching all the time and to cry foul every time power is abused.  Power is not, ever, going to play fair.  It will use every trick in the book to maintain and augment its privileges.

2.  Even aware of its imperfections, of the ways that its vocabulary and statutes can be perverted, I still want to say that human rights codes are good things for the underdogs to have in their arsenal in the battle against power.  The historical record here is, admittedly, spotty, but there are some notable successes to point to.  Occupying the moral high ground (as the American civil rights movement did) is not always a winning strategy, but it can be.  Revealing the hypocrisy (as well as the perfidy) of power can be effective at times.  The human rights vocabulary provides a way of naming the abuses–and a code that enjoys pretty widespread endorsement.  It is hard for power to simply declare that it doesn’t give a fig for human rights.  So this need for at least lip-service to human rights can be used sometimes as a lever.

None of this undermines the seriousness of Daniel’s worry–or the fact that we have seen much abuse of human rights enabled by invoking human rights over the past 15 years in this country.

John McGowan

Practices and Morals

I am going (for once) to try to respond directly to the prior post.

I’ve got two dogs in this fight (already using animals!  Would anyone say “I’ve got two children in this fight?”)

The first is about categorization.  I was using Wittgenstein to say that we, in linguistic practice, make some distinctions by placing things in categories.  And that, as a matter of practice, we usually quite easily put humans in one bin and dogs in another.  I don’t think such acts of categorization are eternal or un-revisable.  I brought this particular animal/human distinction up because, among other things, I wanted to point out just how radical Darwin’s work is–and just what a steep hill it is to climb to get people to really think of humans as animals, to begin (as a matter of speech and thought) to think that humans and animals belong in the same category.  Yes, there are arguments and reasons to offer for shifting the prevailing practice of thinking of humans and animals as very distinct.  But there are no killer arguments, no “grounds” that will just make our former practice obviously mistaken.  Still, we might be persuaded to begin to characterize things differently by the arguments and reasons someone offers for why a change in our current practice is advisable.  We just aren’t going to discover some “essential” fact about humans and animals that “proves” they are the same.

So that brings me to the second issue.  Like Rorty, I believe that changes in moral beliefs are changes in sensibility.  It isn’t so much about arguments as it is about changes in feeling.  So slavery is a great example.  The primitive feeling (the basic moral intuition) is that I would not want to be a slave.  As a condition that I would have to endure, I cannot endorse slavery as a life I would want to lead.  From there (for those who were pro-slavery), there are basically two ways to go: 1.  I impose slavery on those I have the power to dominate.  Slavery is what happens to prisoners of war etc. 2. There are some people who deserve to be slaves, because they are criminals or not fully human.

Path #1 seems to be that of the ancient Greeks (if I understand them correctly).  The Greeks (at least prior to Plato and Aristotle, as represented in Homer and the tragedies) didn’t pretend anyone deserved the horrible fate of being a slave.  It was just bad fortune that made you a slave.  Your side lost the war, and you went into slavery.  It’s only when moral philosophy comes on the scene, and people begin to think you should be able to give some plausible reasons for your conduct, that doing this bad thing to some people (i.e. making them slaves) has to be justified as more fair, more just than simply the bad breaks of fortune.  (By the way, this claim has obvious and direct relevance to Daniel’s argument that human rights talk enables human rights abuses.)  It is moralism, in other words, that calls forth moral justifications of slavery.

Given human perversity, there then arise apologists of slavery who are even willing to argue that slavery is a good thing for the people who are enslaved.  But that’s a pretty desperate line of defense, one that really strains credulity.  In any case, it’s not an argument that wins the day.  Rather, the relevant shift in sensibility is one that gradually extends the circle of those who should not be enslaved.  The process is, basically, I would not want myself or those I care about to be enslaved; on what basis, do I think it OK that these others be enslaved?  It comes to seem unjust, unacceptable, unjustifiable that I have a good (non-enslavement) that is denied to others.  It is that “coming to seem unjust” that is the change in sensibility.

I think the parallel to animal rights is pretty exact.  The human fear of being eaten seems pretty fundamental.  Cannibalism is the stuff of horror stories the world round.  It would take a change in sensibility to include animals in the category of things we find it repulsive (morally or, even at some point, physically) to eat.  But clearly that change in sensibility is underway.  There are many more vegetarians now than there were fifty years ago. (Just like there were slaves in New York in 1740 but not in 1830.)  Will progress along these lines continue?  That’s uncertain. Nothing assures that such changes in sensibility only head in one direction–or that some kind of consensus will be reached at some point.

An interesting side question:  we already think it repulsive to eat dogs, and that seems to be connected to all kinds of other feelings about and practices toward dogs.  The way we behave toward dogs is markedly different than the way we behave toward animals that we do eat (pigs, cows, etc.)  So, if we begin from shifting our practices about eating animals, I would expect that this shift in sensibility would also begin to register itself in other ways we behave toward animals.

So I fully admit that I haven’t a moral leg to stand on when it comes to eating animals.  All I have is an ingrained practice (habit) that I have trouble relinquishing–and a moral sensibility not yet deeply troubled enough about eating animals to make me do the hard work of reforming my practices.

But I certainly didn’t mean to suggest that the fact of a practice is a justification of that practice.  I only meant to say that revising practices is very hard work, that reasons for revisions are good to have but never decisive, and that the kinds of conviction that do lead to change are better described as shifts in sensibility than as intellectual conversions.

Daniel Hayes

999 Out Of 1000

First off, a warning: this will be long. I tried to think of how to break it up, but…

I want to revisit the question of humans and animals. I think it’s an interesting moral issue—what obligations we have toward them. John isn’t particularly interested in animals, at least in comparison to the overwhelming problems of human beings, but that doesn’t quite erase the question of whether or not he has obligations toward animals. I’m also interested in this question because I think it touches on some of the issues we’ve been talking about, particularly John’s brand of pragmatic ethics, his aversion to “theory,” this business of things going “all the way down,” as John puts it, and his occasional indifference toward attempts to explain, justify, or ask questions about “origin” (all of this the opposite of simply accepting that we’ve reached the ground).

My first point is that John is interested in his obligations toward animals—or at least he has a few in his own mind. Everyone does. Everyone thinks there are things that shouldn’t be done to animals, things that you might do to a toaster oven. Apparently, animals are thought to deserve some moral consideration—always have. But why? Kant famously asserted that treatment of animals is of moral consequence because of what it says about us and about how we treat other human beings (the idea being that animals are a training ground, especially for children). But that doesn’t seem right.

We are, I think, very mixed-up moralists when it comes to animals. And when we are mixed up, we tend to dodge (especially ourselves). This is where Cavell’s idea of deflection comes into play. (It’s an idea developed more by the philosopher Cora Diamond, particularly in reference to animals.) Instead of dealing with the moral difficulty of our relations with animals, going wherever our investigations might lead us, we either avoid the issue altogether or deal with it as though we were philosophical mathematicians (this is to somewhat characterize Peter Singer’s approach).

In any case, here’s John, in a previous post, struggling (not quite enough) with the issue:

“I tend to go Wittgensteinian at this point. Users of the language don’t have significant disagreements about what to designate human and what to not put in that bin. Put a dog and a human in front of any speaker and 999 times out of 1000, they will all identify the human as human and the dog as non-human….The point of invoking Wittgenstein is to say that as a matter of everyday practice, language users do not exhibit much difficulty in assigning some creatures to the category human as contrasted to other creatures–and that philosophers may not have any supporting reasons or evidence to provide that will somehow make this everyday practice more secure or more logical or more comprehensible. What makes us think we need something ‘underneath’ the practice to explain or secure it? That’s the ‘theory’ temptation that I see pragmatism as trying to resist. The Wittgenstein move is to say that the practice itself is all the ground we are going to get. There is nothing else that is going to explain or ground it.”

In other words, we make the distinction between ourselves and the other animals because…well, because we do. We could talk about the origins of the practice, what’s “underneath” the practice of making this distinction, but why would we do that? What would that gain us? We could try to explain it, or justify it, but does a “practice” really lend itself to this kind of inquiry? After all, it’s pretty simple: we put things in categories, and these categories then take on differing amounts of moral significance. Humans go here, in container X; and animals go there, in container Y. And we’ve decided also that things in the one container X are of greater moral consideration than things in container Y. It’s akin to the difference between literary fiction and romance novels—the one matters a great deal more than the other. No, God hasn’t decreed these distinctions, but we honor them nonetheless.

I’m struggling here, in part because I think this is a pretty weak argument. That a person might, on 999 out of 1000 occasions, think that a human is not a dog—of what significance is this fact? Is John suggesting, in a different way, that 999 out of 1000 people think that a dog is of less moral consideration (much less!) than a human being? Would this in itself be worthy of note? Isn’t this simply a way of describing what is? And, more scarily, isn’t this a way of saying that what is is for good reason because it is? Does John really want to be saying this?

I don’t think he does. Nor do I think he wants to be describing Wittgenstein’s position in quite this way. Yes, Wittgenstein said, “Philosophy may in no way interfere with the actual use of language; it can in the end only describe it….It leaves everything as it is.” And yes, he also said, “We must do away with all explanation, and description alone must take its place.” But this still leaves the question of what to do in cases of moral dilemma, when an issue comes to the fore and demands action. In other words, how do we talk about what’s morally hard to talk about, given the context of the language (including concepts, ideas, etc.) of our time? Was Wittgenstein really suggesting that we simply accept what we do because it’s what we do? Certainly there are scholars of Wittgenstein who see him as a conservative (i.e., tradition wins out); but it seems strange, given his liberal leanings, that John would endorse such a view.

Let me take a stab at this. For Wittgenstein, like all of our good friends, there is no God’s-eye view of things with which we might evaluate ethical obligations we have toward animals. There are no foundational structures of any kind. What we have, instead, are historically and culturally contingent norms and practices; and we also have contingent moral discourse, which we use as a tool with which to grapple with moral and political questions. Or, put differently, there are language games within language games. One of those language games involves the way we talk and think about animals when we sit down for dinner; another of those language games involves how we talk about our ethical obligations, and what basic standards and criteria we use when we talk like this. The idea that dogs and humans have next to nothing to do with one another, ethically or otherwise—this is a normative standard. That such an idea, within a Wittgensteinian world, might one day seem ridiculous is entirely possible; and if this were to happen, no doubt it would be because this mish-mash of different practices (not just dinner, but dinner-time conversations as well) would take on a different configuration. Happens all the time.

John says, “The Wittgenstein move is to say that the practice itself is all the ground we are going to get.” Here, again, is this notion of ground, and how it is only the essentialists (including now, it seems, deconstructionists!) who won’t accept this humbling situation. So, given the context (“the point of invoking Wittgenstein”), the ground, if I understand, is about how dogs and humans are different and not worthy of the same ethical consideration. (I know that ground, you know that ground, and ground seems the right word.) As John puts it, “There is nothing else that is going to explain or ground” this practice.

I assume he means, ground it further. In other words, to use a different analogy, we’ve come to the end of the road. But is that true? The idea (dogs, humans, entirely different) is obviously grounded, if you wish to use that word, in an entirely different set of ideas. (And then those ideas are, too, grounded in a different set of ideas.) Or, if you want, substitute practice for ideas—I don’t think it matters. John wants to say that we’ve reached the ground floor, as it were; but I think he’s stuck on the idea that it’s an up-and-down affair (an elevator!) when it’s much messier and more interesting than that. The practice of slavery was, one day, a given—the ground on which Americans walked. But that ground gave way, and it gave way in part because people measured the explanations for both the practice’s continuing existence and its abolition. (The scaffolding of slavery, based on an interweaving set of practices, turned out to be weak.) At their best, those interested in the question of slavery scratched the surface of that ground, not to figure out what was true, in some essentialist dream, but to see what might be.

None of this is to suggest that animals are due more moral consideration. But it seems wrong to say (1) that the very question shouldn’t be asked, and (2) that such a moral investigation shouldn’t involve what explanations are given to bolster or enable the practice. John wishes to stop the process at an arbitrary point—here, he says, we’ve reached ground, because we’ve described the practice, the humans-and-animals game we play. But what is asking questions about a practice but another practice? Are practices really so clearly demarcated that we can accuse those who want to get to the bottom of things, to understand the genealogy of a given practice, of being essentialists?

If Wittgenstein says that there is no explaining or justifying our customs and what we do, isn’t he similarly saying that our customs are not justified in themselves? (In other words, what’s truly interesting and revolutionary about this approach is that the absence of explanation or justification is no reason to stop explaining and justifying, as long as you realize that these processes are ongoing, never-ending, without foundation.) In that sense, the agreements we make (a good way of describing politics) are both given and decided, and it’s this oscillating pattern of practices (that define and elicit agreements) that tells us who we are and who we want to be. To say that a practice is governed by rules, as Wittgenstein might, isn’t to say very much; nor does it help to say that those rules have no explanatory or justificatory power. After all, we are forced daily to agree or disagree to rules—and that’s where any concept of justice is to be found.

On a more practical note, check out this detailed investigation in the New York Times: http://www.nytimes.com/2015/01/20/dining/animal-welfare-at-risk-in-experiments-for-meat-industry.html?ref=dining.

Yes, go ahead: read it and cringe. But after you cringe, think of the practice involved—that is, the use of animals as a commodity, as a food source with economic preconditions—and all of it makes perfect sense. After all, this is what we do. Given the firm distinctions we make between animals and humans—a kind of language game we play every day, especially when we sit down to eat—why would it surprise us to learn that so many of us are capable of doing these things (or knowing that these things are done) with a clean conscience?

John McGowan

Polytheism

I am reading Oedipus Rex with my students this week, which reminds me that the Greeks were polytheists.  They didn’t believe sovereignty was undivided.  Zeus is more powerful than the other gods, but he is not all-powerful.  William James came to believe that monism as contrasted to pluralism was the most consequential divide within philosophical thought.  (James, by the way, introduced the term “pluralism” into philosophy.  Prior to his usage, the term referred to Anglican ministers who held multiple “livings.”  More theological origins.)  The deconstructionists are monists.  They keep talking about what things are “essentially” and they keep thinking that origins are determinative, inescapable, a concept or an institution’s fate.  The pragmatists are trying to think through what it would mean to be pluralist to the core, all the way down, with differences and deviations and varieties at the bottom, not essences.  To be, in short, polytheists if theists we must be.  But, even better, to be atheists.

John McGowan

Torture and Human Rights

I love Daniel’s parable about God in his heaven watching humans torture.  And I share his desire for a better strategy (expedient? practice?) than humanism and human rights, one that would have better results.  And I think the two of us should take up the question of political change–top-down? change of heart? etc.–in subsequent posts.  But, right now, I just want to post some words from James Madison that I find just about the most sensible thing ever said on the question of rights.  Madison, basically, admits that rights can be horribly ineffective, but he still thinks they have a potential to do some good, so are worth establishing.  Daniel, of course, has argued that rights can do positive harm, can enable torture, and I still need to address that worry more fully in a future post.  But, for now, let’s give Madison his moment on the stage. (In relation to Madison’s words here, it is melancholy to note that most polls show a majority of Americans feel OK about torture.)  I have taken the following paragraph and its footnote from Chapter Four of my Pragmatist Politics (2012).

James Madison, in a letter to Thomas Jefferson, offers a very clear assessment of what a constitutional “bill of rights” can and cannot do. Madison is responding to the critics who objected to the absence of a bill of rights in the proposed Constitution (produced by the 1787 constitutional convention in Philadelphia).

My own opinion has always been in favor of a bill of rights; . . . At the same time I have never thought the omission a material defect, nor been anxious to supply it even by subsequent amendment, for any other reason than that it is anxiously desired by others. . . . Experience proves the inefficacy of a bill on those occasions when its control is most needed. Repeated violations of these parchment barriers have been committed by overbearing majorities in every State. . . . Wherever the real power in a Government lies, there is the danger of oppression. In our Government the real power lies in the majority of the Community, and the invasion of private rights is chiefly to be apprehended not from acts of Government contrary to the sense of its constituents, but from acts in which the Government is the mere instrument of the major number of its Constituents. . . . Wherever there is an interest and power to do wrong, wrong will generally be done, and not less readily by a powerful and interested party than by a powerful and interested prince. . . . What use then it may be asked can a bill of rights serve in a popular Government? I answer: 1. The political truths declared in that solemn manner acquire by degrees the character of fundamental maxims of free Government, and as they become incorporated with the national sentiment, counteract the impulses of interest and passion. 2. Although it be generally true that the danger of oppression lies in the interested majority rather than in the usurped acts of Government, yet there may be occasions in which such an evil may spring from the latter source; and, on such, a bill of rights will be a good ground of appeal to the sense of the community. . . . It is a melancholy reflection that liberty should be equally exposed to danger whether the Government have too much or too little power, and that the line which divides these extremes should be so inaccurately defined by experience.[i]

[i]Letter of James Madison to Thomas Jefferson, October 17, 1788. Quoted from Something that Will Surprise the World: The Essential Writings of the Founding Fathers, ed. by Susan Dunn (New York: Basic Books, 2006), 389. For a detailed history of how the US Bill of Rights was created—and of the leading role Madison played in its construction and its passage—see Richard Labunski, James Madison and the Struggle for the Bill of Rights (New York: Oxford University Press, 2006).

John McGowan

Secularism and Sovereignty

So we have read this essay by Michael Naas and Daniel is right that I have lots of objections and would go about trying to think about these issues in a very different way.

But I am tired (at the moment at least) of arguing against the intellectual strategies of deconstruction. Instead, I want to think positively in this post, not criticizing Naas or Derrida, but thinking along with them about issues they bring to the fore.  (May I add that Naas writes beautifully, with an obvious desire to clarify matters for his readers, a strong contrast to Hamacher.)

The key issue is sovereignty: what could legitimate power or authority, where power and/or authority are understood as standing “over” people, as commanding their obedience?  In a secular world this problem becomes: what could give any human the standing to have power over another human?  If the sovereign (i.e. God) is a higher being, then its authority seems self-evident.  (We tend to think the same way about parents.)  But once you eschew hierarchy in favor of the assertion (one example of the kind of founding faith-statement that Naas wants to highlight) that all humans are created equal, then where could a legitimate authority come from?

The traditional Enlightenment answer, from Hobbes through to Kant, was an origin story about a social contract.  The only legitimate sovereignty is one that a group of humans creates for itself in order to secure certain basic goods (security prime among them). For all kinds of reasons, this is not a very satisfactory story.  Perhaps its biggest problem is that none of us was there at the origins, but we are still subject to that power that was established then.

In this modern, liberal (in the classical sense of that term), establishment of authority, there is always the attempt (not so much in Hobbes, but in all the others from Locke on down) to limit the power of the authority.  That’s what a Bill of Rights does, that’s what a separation of powers arrangement does, that’s what checks and balances do, that’s what federalism does.  So when Naas writes: “sovereignty is, in essence, always indivisible, unshareable, and unlimited” and then associates deconstruction with “every attempt to think or put into practice a division, sharing, or limitation of sovereign power” (26), he seems (to me) to be aligning deconstruction with the liberal project.  I object (of course) to saying that sovereignty is “in essence” the bad kind and not the divisible, sharing kind.  What gives the unshareable and unlimited version priority, a “privilege” to use Derridean language?  But, putting that methodological quibble aside, the question of how to prevent power’s accumulation in one locale, in one set of hands, is central to every effort to create a non-tyrannical political order.  And every contribution deconstruction can make to that attempt is to be heartily welcomed.

The second issue, for now, is how this project of dividing up sovereignty is connected to secularism.  One obvious answer is that secularism de-centers the power of the Church, while distributing citizens across a wide range of faiths.  If not every citizen believes in the same god and some believe in no god at all, then sovereignty no longer resides simply and indivisibly with the Pope or the Archbishop of Canterbury or God or Allah.  That hardly means that a tendency to look for and install an indivisible sovereign has been banished from the world.  My country right or wrong is a familiar substitute for the lost God.  That’s why cosmopolitanism is an important idea (and maybe one Daniel and I should also turn our attention to.)

Predictably, I don’t quite get what Naas thinks Derrida is doing beyond our everyday working notions of cosmopolitanism.  Of rights, cosmopolitanism, and other Enlightenment notions Naas tells us “they must be clarified, supported, and expanded at the same time as their theological origins are questioned” (30-31).  That our ideas and practices should always be questioned, always examined to see if they are functioning well, is a recommendation that it is hard to imagine anyone disputing.  But how, exactly, questioning theological origins will change our relation to some basic commitments is vague in Naas’s essay.  The closest he comes to an answer is to say that we’d get a secularism that was non-dogmatic.  Again, a good idea.  Looks like Peirce’s fallibilism or Rorty’s liberal irony.  But still I’d like to try to imagine it played out in the world.

I am tolerant of people who believe in God.  I don’t think that my happiness or quality of life is threatened by their existence and I am content to let them be.  I do not endorse a sovereign that would insist on uniformity of belief.  But I am dogmatic in my atheism to the extent that I firmly believe that God does not exist.  What would my atheism look like if it were less dogmatic?  And that atheism slides into secularism when I say that the polity cannot command that I believe in God.  That, too, is a dogmatic position.  What would it mean to be secular in that sense but not so dogmatic about it?

Daniel Hayes

Smells Like Dogma

I read Michael Naas’s “Derrida’s Laïcité” with some trepidation. I suppose I was reading through John’s eyes, to some extent, and imagining his response. All this sniffing, this uncovering, this emphasis on what’s “unavowed” (a religious way of putting it, no?), this requirement that everything be “submitted to critique.” Plus, it doesn’t help a reader to puzzle needlessly over words and concepts, not knowing what to make of, for example, “the very experience of nonrelation and of absolute interruption.” Yes, yes, I know. But I also wanted to learn something by reading the article, and I suggested it as a text because I thought it raised a number of interesting questions. Here are a few that I came up with.

1. Naas wants to make the case “for an originary or radical secularity that includes a critique or questioning of religious dogma by means of a more primordial or originary faith.” Okay. In deconstructive language, Naas describes the intellectual and political mission as recognizing “the imperative to submit to critique and to clarify the hidden and often overlooked relationship between the political and its theological origins.” In other words, it turns out—says Derrida—that “the theological” is alive and well in contemporary political structures. If so, then why is this important? (One possible answer is that it requires us to change the common narrative: for a long time, the theological influence on everyday life, both personal and political, was enormous; then the Enlightenment occurred, even if in historical fits and starts, and gradually the theological dwindled in importance and was replaced by reason and/or non-theological sources of thought; and here we are today, still on board with the Enlightenment, waiting for religion to go the way of the horse-and-buggy, even if we like to think of ourselves as being more enlightened about the limits of reason.)

2. Are Naas and Derrida right in saying that we are not quite seeing things clearly—that is, blind to the fact that liberalism and democracy constitute their own theological entity, competing with others (even as we like to think we are somehow beyond religion, beyond that conflict)? Is there something fishy about our definitions of “religious wars,” our sense that they are no longer necessary? Of course we know that these wars are not over—since many religious wars, often “barbaric,” go on within the borders of individual countries. But are we, too, involved in wars of this kind, or are we simply attempting to get rid of the need for such wars?

3. Is there “sovereignty” (with an in-built exceptionality) without somehow a sovereign god? Is there always a theological filiation to sovereignty? And does it matter? Perhaps it’s (a) not possible to organize human beings without the idea of sovereignty, and (b) not possible to strip sovereignty of its theological filiation. If those two things are true, then it’s instructive to know about the theological underpinnings of liberalism, democracy, or just about any enlightened form of political organization, but at some point you might want to say, “So what?” (Or at least I can hear John saying that.)

4. This, at least in a theoretical way, sounds interesting: the necessity of something godlike, but it requires, writes Naas, “a rethinking of such a god in terms of everything a sovereign god must not be—that is, as a god who is vulnerable, divisible, powerless, and so on—in short, a god who has undergone deconstruction, a radically secular god, if you will.” This returns to the issue of vulnerability and Cavell’s ideas about acknowledgment. But is this a good avenue for further thought, or is it just mumbo-jumbo?

5. Is this notion of a “secular god” synonymous with practitioners of post-secular thinking and their “return of religion”? Or is there a difference (in Naas and Derrida)? Furthermore, is there anything to be gained by this idea of a “secular god”? Does the concept help us in any way, either in terms of understanding ourselves or figuring out how to act politically?

6. For Naas, a better way of thinking “would not simply purify the state of all faith but seek out the original faith or originary engagement at the origin of both the state and religion.” Okay, but what is this original faith, what does it exactly mean? (Naas even speaks of “this unengenderable God, that is, God as already there, even before being.” Wow!) In my reading, Naas’s idea of the requirement of faith seems connected to the idea of any statement of political agenda having the form of “Believe what I say as one believes in a miracle.” I find this an intriguing idea. I imagine John will hate this, since he wishes, as a good pragmatist, to think it’s entirely possible to believe something, to speak of its “truth,” without recourse to anything as otherworldly as miracles. I sympathize, of course, but I also wonder sometimes whether this is how the human world works, both linguistically and in practice: we can’t stop ourselves from referring to a world other than our own. Is this a silly thought?

7. In one sense, Derrida and Naas want to drain secularism of its pretensions. (Shouldn’t we applaud them, at least for their efforts?) As Naas suggests, Derrida is promoting a secular idea “without theological dogma but also without the dogmatism of secularism.” Okay. But what’s wrong with dogma? This seems close to the question we’ve been asking since the beginning—trying to locate the villain, as it were. Dogma? Religion? Religious dogma? Religious dogma is a tried-and-true enemy, but now we’re asking whether it’s of any significance that secularism might itself be a dogma. (Smells like dogma, so asserts Derrida, our sniffer dog.) If secularism is dogmatic (in the religious sense), is dogma really the culprit, or is the entire category of the theological the real problem? I’m confused, but John is going to help out.

Daniel Hayes

You, Me, God, and The Little Ones

Some of what bothers me about political discussions is the assumption of a top-down approach. Think of it as a confusion over what ethics means, or the very different connotations that that word carries. On the one hand, we think of ethics as something that dictates our decisions in our personal lives. We make choices every day, and we like to think that these choices are ethically informed, even if we disagree about the particulars. On the other hand, we exist in public arenas that call for the application of ethics to political structures, decisions, legislative possibilities, and so on. But often our political discussions seem to have nothing much to do with the ethics we employ in what we might call our daily lives.

For one, according to the divide between private and public realms that liberalism endorses (whether liked, by Richard Rorty, or disliked, by Wendy Brown), we have our own personal viewpoints on religion and metaphysical questions, but are these relevant to our political concerns? Not really, or only through some backdoor that resists the liberal narrative. (And as John says, perhaps there’s nothing to be gained by “getting into the weeds with the theologians.”) The kinds of things we might think about—does God exist? what does it mean to be “human”? should my ethics be based on absolutes? am I obliged to other people?—don’t really matter in the political arena, or they shouldn’t. What matters, from a pragmatic liberal sensibility, are solutions to political problems, and those solutions come from tools that we manufacture to take care of large problems and the large numbers of people affected by them.

I think this top-down idea is a mistake. In part, it’s a mistake because it doesn’t really honor the way that political change happens. (And change is crucial. To pragmatists, it matters because you always want to honor its possibility; you want to be light on your feet, ready and willing to reconsider what today passes as “solution.” From an Emersonian point of view, change isn’t so much a possibility as a requirement of personal and political life.) How, for instance, did slavery once exist in the United States and then not exist? How did this change come about? Was it from the employment of a top-down approach? Surely there were elements of this approach—laws, statutes, the political machinery of government; but surely there was also something else, something more difficult perhaps to identify and replicate: a change of mind that occurred on a number of different fronts. It’s very hard to see how slavery was abolished with a keen eye kept on the distinction between the personal and the political—as though all those people back then were jabbering about political solutions and not their personal and religious beliefs about the relative “worth” of black folk.

Anyway, having said what I’ve said, I now want to indulge you in a top-down experiment. Very top-down. Imagine yourself as God. You look down on the earth, with its many little ones of the erect and linguistic type. These little ones seem to organize themselves in various ways—into families, tribes, nations, states, coalitions, and religious groups. (In a nostalgic gesture, some of the little ones even worship you!) Despite these group identifications, they get along with each other—that is, until they don’t. And when they don’t get along, they do bad things to one another, and one of these things is torture. It breaks your heart to see it happening. So sad—the good of punishment taken too far.

Apparently, the little ones torture even when they say they want to stop torturing. It’s a problem. In order to prevent torture, they’ve come up with the idea of “the human” (a word they often apply to one another). Apparently, all humans have rights, and one of these rights is the right not to be tortured.  For a little one to torture another little one is considered a violation of human rights, as though the person weren’t a human but a dog or a tree. This is an idea, which the little ones have subsequently and impressively institutionalized, in order to stop various kinds of behavior, including torture. In short, it’s a solution to a problem. And the idea of being human, and having rights because you are human, is a very popular solution amongst the little ones.

But what seems odd—at least to you, gazing down on things—is how the little ones only consider each other “human” until they don’t; and often it’s just in those situations where they might torture each other that doubts arise as to whether someone is human or not. (Apparently, it’s okay to torture non-humans, or at least then a human right isn’t being violated.) And so you sometimes find yourself wondering—okay, you admit it, in a paternal way—if this is the best method for the little ones to deal with the question of torture (or any other of their concerns). Maybe they have their hearts in the right place, but this idea of “the human” and inherent rights seems flimsy and too easily manipulated to have staying power as a way of lessening the practice of torture.

And yes, you do realize that your influence waned a long time ago, and so it’s no use doing what you really want to do—pull out the old bullhorn, tell the little fuckers not to torture anyone or you’ll send them to an eternal existence in a fiery hell. (Apparently, your days of sending shivers up the spines of the little ones are over. And you’re big enough to acknowledge this.) But still, at least in your benevolent moments, you do wish they’d come up with something better than this business of “the human.” It seems to be such a poor substitute for your own injunctions. Though, in your vain moments, you also appreciate the tip of the hat—the way the little ones have taken their so-called dignity, misrepresented it as inherently their own, and assumed its relevance in keeping order.

John McGowan

Two Quick Thoughts

Two quick things to say in response to some of Daniel’s recent posts.

1.  Paraphrasing Daniel: If many (most likely a majority) of people who appeal to the notion of human rights also think that the very coherence and plausibility of “human rights” depends on some kind of metaphysical foundation, then how can the pragmatist just dismiss those metaphysical arguments/appeals?  Here’s the quick thought: this takes both Daniel and me back to the very start of this long-running conversation. A majority of Americans (at least) and probably of human beings on the planet believe in God.  Daniel and I reject that belief.  How much respect do we have to give that belief?  And how much breath do we have to spend arguing against it?  The Rorty claim is that arguing against theists is not likely to prove productive.  But Rorty, and I have followed him in this, will give a general account of what he deems his alternative, non-theist position.  However, Rorty will refuse to get into the weeds with the theologians, working through (and attempting to refute?) all of the elaborate ways they construe what it means to believe in God, and who God is, and how humans should think about God.  That’s my position vis a vis the deconstruction folks.  I explain roughly  why I want to pursue a different intellectual tack, and then go on to pursue that tack, instead of reading each new variant of deconstruction that comes down the pike in order to show why I don’t find it persuasive.

2. Daniel’s most recent post poses a serious challenge: what if human rights language actually enables the violation of human rights (i.e. enables torture)?  That’s a substantive question, to which I must give some real thought, because Daniel is absolutely right that I want to believe (assert?) that the gap between our ideals and our actual behavior can be exploited to improve the behavior and to hold the torturers to account.  But what if, as he suggests, the statement of the ideals does not function to highlight our shortcomings but instead to mask or to justify them?  Masking, I assume, would be something like hypocrisy, whereas justifying would be more like enabling.  I take it that the second possibility is the one that troubles Daniel most–and well it should.  So I want to mull this over.

Daniel Hayes

Good, But Good Enough?

John makes a few claims in his last big blog entry, and here are three of them: (1) that “to blame the prescription for the violations of it is to locate the cause of trouble in the wrong place”; (2) that his opponents argue that rights are a wrongminded waste of time if (a) “rights are ever violated” or (b) “the vocabulary of rights is ever used to excuse and justify oppression” (my emphasis), and (3) that Hamacher, and many of his ilk, deal too much in “generalities [that] never engage with difficulties of specifics.”

I agree with #3, but I don’t agree with #1 and #2. Since I, too, wish to move the discussion to specifics, I want to center on the issue of torture. It’s currently relevant, and in prescriptions against it there is almost always the idea of torture as somehow violating human rights. We have here—in terms of the procedure of torture—a good case of “behavior” (John’s term) that we wish to counter. (I’m assuming an ethical desire to curtail torture, in all its aspects.)

John wishes to ask whether a certain tool—the application of “human rights” as an institutional and legal criterion—is useful, and he suggests that this is a question of whether it, the tool, “does more harm than good.” I think that isn’t the question, since the desire is to stop torture. And no, I’m not suggesting that measures that produce anything short of a total absence of torture are insufficient. But I am arguing that “does more harm than good” suggests that we have the choice to employ one single tool or not employ it. The right criterion involves asking the question, “Is this the best way of ending torture, even if we don’t completely end the practice of torture?” Or, put differently, is there a better way?

Anyway, back to torture and our efforts to stop this behavior. John likens this, in a parallel case, to whether the prescription of “do not kill” is bad if sometimes people do kill (that is, the prescription doesn’t prevent its own violation). But John is here playing with apples and oranges. The prescription of “do not kill” is not analogous to “do not violate the human rights of another person by torturing them”; it is analogous to “do not torture another person.” And no one is blaming the prescription; they are, instead, questioning the application of a way of ensuring or achieving, as best as possible, the outcome desired by making the prescription in the first place. (In John’s example, this would involve methods of stopping people from killing other people.)

I think John has a tendency to think of “do not torture” and “do not violate the human right not to be tortured” as equivalent. I realize that this is the liberal point of view (denying the difference), but it doesn’t really work in terms of logic. Instead, it’s obviously an argument about whether the difference in the two statements amounts to anything.

In other words, John and I both want to end torture. We agree on the prescription. And now we’re talking about the application of methods. One of those methods is the rhetoric, and institutionalization, of “human rights.” And John is right in saying that it would be entirely unfair to assess the utility of the rhetoric of “human rights” according to whether the prescription is violated. Any method of ensuring the relative success of a prescription is going to fail in this sense. But it seems disingenuous of John to suggest that opponents of the rhetoric of human rights think that any violation shoots down the whole likelihood that human rights is a good model for getting things done.

Who in the world suggests that human rights, as a tool to oppose oppression, mean nothing if “the vocabulary of rights is ever used to excuse and justify oppression”? And I think the reason why John is using this extreme language—ever—is to sidestep an important distinction between individual cases and systematic problems. This isn’t to deny that there might be some who would attempt to use deconstructive mumbo-jumbo to ensure that systematic problems always exist; but there are plenty of people who think—much more simply, much more mundanely—that there is a strong enough connection between individual cases of “violation” and something, within the overall system, that might lead to these individual cases. They may be wrong about that, but it’s not quite the either/or situation that both John and his dreaded deconstructive opponents seem to thrive on.

In other words, maybe “rights” rhetoric is good but not, in John’s Winnicottian sense, “good enough.” Could that be possible?

Take America, for instance. Americans are big on human rights, no? And supposedly the difference between us and our opponents is that we believe in human rights, endorse them, enforce them, and so on. And so you would think that we might oppose torture. Why? Because torture is wrong? Well, that’s a nice sentiment, but it doesn’t have any bite to it. And so we say that torture is contrary to human rights. But in fact we use torture. And we particularly use it when we meet up with the kind of people who don’t endorse the idea of human rights. There is a sense, perhaps, that the rhetoric and even the application of “human rights” is merely convenient. In the worst cases, human rights are applied to other countries but not to our own. In the slightly-less-than-awful cases (of which Obama is our current author, or at least the lead author), we fail to act on violations of human rights when that enforcement would have us looking uncomfortably in the mirror.

In other words, we have it both ways: we torture, and we support human rights; we send drones to kill people whom we might otherwise capture and then torture, and we actually use the word humanitarian in order to justify our new methods. Or we have interesting discussions about whether or not torture is effective, as though effectiveness were a good criterion for whether or not we should uphold human rights. How did this come to be? Why does this hypocrisy, or whatever you wish to call it, seem so intractable?

John might say that we are simply not living up to our ideals. We violate our own laws, our own ethics, but this is not necessarily a reason to wonder whether there’s anything wrong with the overall legal and ethical viewpoint. And he’s right, of course: a law against torture is a good thing, whether or not it’s ever violated (even whether or not it’s always enforced). But here, in speaking of “human rights,” we’re not talking about laws. And so if we are not living up to our ideals, it’s not simply a matter of not always following the law, but of the application of a moral vocabulary that seems very often to have little to do with our actual behavior. In fact, it may be that the vocabulary is a kind of shield against true questioning of our behavior. (We only torture when we have to, when our way of life—which is based on reverence for human rights—becomes endangered.)

To be clear: there’s always going to be a gap between ideals and reality, between a moral vocabulary and the good work that you hope that vocabulary contributes to. But it also seems fair to question that moral vocabulary if that gap, or disconnect, appears excessive. It seems fair to at least entertain the claim that the gap between the moral vocabulary of rights and the ongoing existence of American torture isn’t excessive. But it also seems fair to ask whether a series of individual violations—perhaps the ones documented in the recent Senate report—bespeaks a larger problem, one that might have us, if only in our imaginations, coming up with a new moral vocabulary that would work better in altering our behavior.

I said before that the important question is the following: Is there a better way? (Not whether human rights do more good than harm.) I haven’t here specified a better way, though I also think that you don’t get to the better way (and the kind of imaginative thinking that that might entail) without first making sense of what’s wrong with the current way of thinking. And this is what bothers me about John’s approach. He seems to be saying, “Don’t tell me what’s wrong with something until you can give me a better alternative; otherwise, you’re just ‘taking credit for a holiness’.” Isn’t there another possibility? Wouldn’t it be a good idea to question—particularly with the wayward drift of America—whether current notions of the good are good enough?

The weakness of these thoughts, my thoughts, is that I haven’t gotten to the topic of metaphysical objections to humanism and, by extension, human rights. (This is what John is mostly troubled by, I think: the sanctimonious idea of rights rhetoric being suspect due to its harboring foundational ideas.) I won’t try here, but I think this involves a step in making sense of what’s not working in the concept of rights, or attempting to figure out what allows the United States (besides simple notions of hypocrisy) to require of others what it doesn’t require of itself.

John McGowan

Getting Back to Dialogue

As Daniel says, we’ve been talking past one another, in large part because I felt the need to play out my full argument against the Hamacher line of thought.

But we need to get back to engaging one another.  So this post is just a short to-do list for me.  And then I’d like Daniel to offer his own list of topics that are out there for us both to engage.

1. The empathetic Cavell versus the smug Rorty.  I find this characterization completely convincing.  So it should push me to think about how Cavell’s taking the skeptic seriously (and hence dwelling on epistemological issues) leads to a more attractive authorial persona than Rorty’s appeals to “solidarity” even though (arguably) both Cavell and Rorty share the same politics.  (It is not irrelevant to note, in this context, that Cavell was led, very late in his career, to consider the issue of animal rights, a topic that Rorty never took up.  I do wonder, if Rorty had lived long enough, whether he would have felt the need to address this issue.)

2.  What’s philosophy good for if it is not metaphysical or epistemological?  Daniel raises this question in his second post on Flanagan.  I have various thoughts on that topic.

3.  Finally, and I will admit to my mind most importantly, is Daniel’s list of the commitments that characterize humanism.  Since I have been preaching paying attention to the specifics, that list is the invitation to do just that.  I want to take the time to examine that list and a) see where and to what extent I endorse each item on it and b) ponder whether there are additions I would make to the list.

John McGowan

Nothing Outside the Flux, Part Three

I am not satisfied with part two, so I may return to that territory to hone my explanation of my position.  But I want, right now, to press ahead because I will be out of town for the next week–and then comes the onslaught of the semester.  I might not get back to posting for another 2-3 weeks.

The subject of today’s post is why work like Hamacher’s pushes my buttons.  Why, Daniel asks, does his line of argument make me angry?  A great question.  To which I think I have two answers.

1.  I find Hamacher’s essay (and work like it) upsetting because it has great prestige in certain corners of academia–and the academics who adopt this line of thinking understand themselves to be engaged in a kind of “radical critique” that purports to lead to the “undoing” of western metaphysics, along with an uncovering of the perfidy of a liberalism that mistakenly thinks itself a guardian of freedom and autonomy.  To state the basic argument once more: Hamacher understands himself as showing us that “rights” can never be universal, even while they claim to be universal, because built into the structure of rights is a moment of “judgment” about who is entitled to the right and who isn’t.  And he argues that this undermining of universality undoes completely the legitimacy of rights, while implying (without ever saying it straight out) that rights are tyrannical and the source of violence and coercion even as they pretend to be liberatory.  Critique uncovers the “true” nature of rights, a nature hidden from the more naive liberal who, because not personally injured by rights’ pernicious exclusions, fails to see how rights fall far short of their promised benefits.

The Hamacher line of argument seems fallacious to me. It is not a question of creating a concept of rights that is perfectly consistent or that guarantees that rights will never be violated.  There is a huge difference between establishing a prescription–do not kill–and ensuring that that prescription is always obeyed.  We don’t need prescriptions for actions that no one is ever tempted to do.  To blame the prescription for the violations of it is to locate the cause of trouble in the wrong place.  The prescription is an attempt to come to terms with the behavior.  It is not a perfect attempt, but it is a “good enough” attempt if it works in some instances.  And it also is useful if it helps alert us to violations (herein lies one of the uses of post-transcendent philosophy, a topic I hope to take up in a subsequent post).

In short, we should not expect to establish either a conceptual or an institutional order that makes reprehensible human actions entirely a thing of the past.  If that is the case, then the right question to be asking is whether the notion of “rights” does more harm than it does good.  But Hamacher’s reasoning proceeds instead along the line of the counter-example.  If he can show one instance where rights cause harm, then the whole notion of rights is undone by that failure.  He thinks this way because he thinks in generalities, not particulars.  A single counter-example proves a general claim false.  But if we abandon the whole framework of generality, then this line of argument collapses.  Hammers are not proved to be illegitimate, harmful tools because using them to unscrew something would be counter-productive.  Hammers are not even proved illegitimate if someone uses them to commit violence against another person.  Hammers can be used in good or bad ways–and that’s a fact tied to the specific circumstances of when they are put to use and how they are used and what is the aim of their use.  There is nothing beyond that, nothing interesting or relevant or significant to say about hammers generally.

But why do I find the addiction to this line of thinking so annoying?  Because of its prestige partly.  Because it and its practitioners have proved so impervious to any criticism of their way of thinking.  Because its practitioners take a condescending attitude toward those they deem non-theoretical and non-radical.  Because this way of thinking presents itself as profound and foundational, uncovering the deep logic of our concepts and our world, when in fact it is superficial, formulaic, and lazy.  Superficial because its generalities never engage with the difficulties of specifics, formulaic because the basic move of deconstruction is always the same and can be taught to any reasonably intelligent undergrad in two to three class meetings, and lazy because it takes terms like “liberalism” and “western metaphysics” and assumes it knows what they mean without ever actually engaging the significant differences between, say, Locke and Rawls as liberals, or Plato and Spinoza as metaphysicians.

Hamacher can dress up his essay with learned references to Plato and Benjamin, but to anyone familiar with deconstruction, once Hamacher sets up his basic emphasis on judgment, it is completely clear where the essay is headed even if the prose style obscures the reliance on the basic formula.  I have used the phrase “paradox mongering” in the past to characterize this style, which parades its profundity by embracing the counter-intuitive in a vatic prose that presents itself as revelatory.

Let me give you the formula here: You take a concept and then place it in binary opposition to its opposite.  You then show how the concept’s integrity depends on its exclusion of its opposite.  That is, the concept is defined, its boundaries delineated, by what it does not include, by what it does not name.  Then you show that the concept is, in fact, dependent on the thing it excludes, that the excluded thing is necessarily central to the concept itself, thus dissolving that concept into contradiction.

This formula is a powerful tool for reading concepts and texts.  It can often prove illuminating.  But it has no critical edge when it is applied everywhere and everywhen.  If every integral thing is actually built upon exclusion–and therefore contradictory (and deconstructive logic has never met anything it cannot deconstruct)–then what’s the pay-off of the critique?  Rights are self-contradictory, but so is everything else.  There is not a better alternative, just (this is where Derrida ends up in his work) a mystical “justice that is yet to come,” a justice that beckons us from afar because it is unachievable and inarticulable, a “radical alterity.”  Nothing in this world is ever just when measured against this other-worldly standard, so it’s critique all the way down, all the time.  For a philosophy that claims to attend to difference, deconstruction renders all things human the same.  All attempts to be “just” turn to ashes in its mouth.  My point is that humans and human life are more various than that.  Sometimes we get things right; sometimes we are satisfied.  Our failures do not mean our successes are meaningless.  Deconstruction seems religious in its disillusionments.  It seems to be a late 20th century version of the Victorian idea that if I die completely and utterly (no after-life, no God) then everything I ever did in my life is rendered meaningless.  Deconstruction’s version of this logic is: if rights are ever violated in one instance, or the vocabulary of rights ever used to excuse and justify oppression, then every instance of rights is suspect.

One further point along these lines.  The deconstructive discussion of justice and rights is the opposite of Berlin’s pluralism.  Deconstruction hankers after a universe in which all goods (in the plural) prove compatible, a universe in which we get to have everything working smoothly together.  But Berlin’s point is that goods conflict, that we often have to make hard choices.  Deconstruction, it seems to me, rails against these kinds of difficulties, retreating to a claim that it is some kind of foundational flaw in our concepts that generates complications, imperfections, compromises, and failures, not the more mundane limitations of what we desire in relation to what we can achieve.  Again, I think deconstruction is looking for solutions and explanations in the wrong place–and it is protesting loudly against all partial expedients while holding out for a perfect world.

2.  But when I really push at the reasons for my annoyance with someone like Hamacher, I find it comes down to what I think is his dishonesty.

In one way, I could align myself with Aristophanes in The Clouds.  It is amazing what these intellectuals can find themselves claiming to believe once they are seduced by a train of thought.  I don’t think Hamacher is insincere or hypocritical.  In some way, I do not doubt, he believes what he says–and hence it is unfair to call him dishonest.  He is merely laughable, as Aristophanes laughs at Socrates.

But another part of me can’t let him off so easily.  Here I am following the pragmatists at their very origin.  One of the first things Peirce ever says–and it comes in the context of his first articulation of the “pragmatic maxim”–is that beliefs are guidelines for action.  If I am thirsty and I believe that there is lemonade in the refrigerator, I cross the room and open the refrigerator to get the lemonade.  If the lemonade is not there, I find that my belief was mistaken–and my action is futile because based on a false belief.  That belief was not efficacious–and I realize I must revise that belief, find some explanation of why I was mistaken, or else be condemned to repeat that futile action again and again.

This leads to what I think of as the “pragmatist difference principle” (to be distinguished from the Rawlsian difference principle, which is an entirely different thing): to wit, purported differences in philosophical positions are only meaningful if they produce different ways of acting in the world.

And this is where I do not believe Hamacher, where I accuse him of dishonesty.  He argues for his right not to have rights, but I don’t believe for a minute that he would actually give up his right to free speech, or his right to a fair trial.  His list of “demands” on page 684 makes it obvious that he wants to introduce all kinds of exceptions, all kinds of reserved rights, into his general call to abandon rights.  That page strongly suggests that his whole essay is an elaborate exercise in play-acting.  He has no intention of acting on his stated beliefs.  And that is what pisses me off.

It is hard to say what you truly believe in words.  All kinds of things carry us astray once we board the boat of language.  But I think writers should try to say what they believe.  I understand that the difficulties of this enterprise lead to fiction in many cases–where fiction involves using a persona or an unreliable narrator or a multiplicity of voices (thinking of Bakhtin here) to get at an enunciation of what the writer believes.  Our beliefs are not always available for direct, head-on expression.  And often, before we start writing, we don’t know what we believe.  The act of writing reveals our beliefs to ourselves.  But, all that admitted, I do think some kind of “reflective equilibrium” (a Rawlsian concept that Flanagan embraces in his work) is in order.  A writer should test continually what he says on the page with how he acts in the world.  The writer should avoid, as much as possible, saying things he would not endorse by his actions.  And it is the propensity of a lot of academic radicals to spout various views that they nowhere show any evidence of taking seriously, of acting upon, that elicits my scorn.

So I will end with a specific instance from Hamacher’s essay.  Here is one of his “demands”:

“That no community and no politically constituted society has the right to isolate any of its members, whether it be in order to protect itself or in order to exert punishment. Societies are orders of adoption.  Every form of isolation, of segregation and arrest is a form of social murder.  The killing of a human being can never be legal” (684).

Why don’t I believe that Hamacher would act on this demand?  Imagine that the university where Hamacher teaches has been the scene of multiple rapes.  The person responsible for those rapes has been identified.  Would Hamacher actually recommend that this person not be isolated or, in some other way, prevented from having full and complete freedom to occupy that campus?  (We should also note another odd move in Hamacher’s thought.  He defines, by fiat, any isolation of someone as “murder” and then deploys what is usually an argument against capital punishment as an argument against any punishment effected by the means of isolation.)

This example is a perfect illustration of where thinking in absolute and general terms gets a writer like Hamacher.  If you are a Berlin pluralist, you understand that punishment is a difficult issue, among the most difficult ones that a society that holds rights dear will face.  There is no doubt that punishment is often abused, that sadistic urges find an avenue for expression by taking advantage of schemes of punishment.  But it is also true that such sadistic urges, such violence against others, take place outside sanctioned schemes of punishment, and it is very difficult to imagine responses to such violent behavior that do not entail some kind of isolation or some other form of punishment.  If Hamacher has an alternative, one he would recommend that our society act upon, good.  Let’s have it.  Current forms of punishment are hardly something that I would endorse enthusiastically.  But to just proclaim all isolation, all punishment, illegitimate is gestural politics, taking a wonderfully pure stance that is open only to someone who never actually has to act on his proclaimed position.  And it’s this rhetorical, gestural politics, so proud of its purity and righteousness, that makes me angry.  Because, to repeat myself, I don’t believe it for a moment.  I think Hamacher is taking credit for a holiness that he would never really put into practice.

Finally, just to be clear.  Punishment is really, really hard.  It’s a bad thing and all too often done even more badly in practice than it is bad in theory.  But the answer to punishment’s difficulties is to get down and dirty with the actual cases, to think hard in every instance about the most appropriate response in this case to a particular act done by a particular person.  What, in this case, will do the most good–where, as will often be the case, there are conflicting goods (for example, the good of protecting the community conflicting with the good of the person who committed the act).  Finding the best trade-offs among these goods requires fine-grained attention to the case–just the kind of judgment that someone like Kant was trying to think about in his Critique of Judgment.  In the face of such difficulties, sweeping statements about the illegitimacy of all punishment, or the perfidious nature of all acts of judgment, are less than useless.  Such statements (which are the stuff that Hamacher’s essay trades in) are counter-productive, distracting us from the hard work that needs to be done.

Daniel Hayes

The Elephant & The Hammer

“Rights are just human contrivances, tools created for getting some things done. And that’s why being metaphysical about rights is barking up the wrong tree.”

After writing that, John wrote his “Nothing Outside the Flux, Part Two.” I have pretty much zero disagreement to present in response to John’s ideas about rights, and his entire notion of “flux,” which is in keeping with “contrivance.” And so I agree with the first sentence of the quote above. The second sentence is what bothers me. What’s the use of weighing in on metaphysical issues if it doesn’t have anything to do with how you figure out what to keep in your toolbox? And it’s not that John is simply saying that it’s best not to go metaphysical; he’s saying that there’s a necessary connection between the idea of “human contrivance” and the absence of metaphysical discussion, even if that metaphysical discussion is antimetaphysical!

Let me present my disagreement in a provocative way. Let’s say that I don’t want to use the hammer anymore. I don’t like the hammer. It comes with too much baggage. It drives the nails in crooked. It is too effective, in some ways, and so the finished product looks bad. The hammer is too heavy in the average hand, and it’s impossible any longer to wield in an elegant way; it leads to violence, even if that’s not its stated intention. Plus, using the hammer keeps me from reaching for other tools—even new tools that may not be as efficient as the hammer (right now) but may be better down the line.

I’m not suggesting that rights are this hammer. (Though “humanism” might qualify, or American “freedom”.) But I’m trying to explain why I’m uncomfortable with what seems to be an artificial walling off of categories of dissatisfaction (with tools). Why must we limit the kinds of arguments that people can make against political tools? What if I want to say that “human rights” are contrary to God’s command? Oh, there I’d be going all metaphysical—not good. But apparently it’s okay to say that God is in favor of human rights. Or does it not concern John that 99% of the people who argue for rights are, in his words, barking up the wrong tree?

None of this is a compelling argument against rights in particular. But it underlines the problems with removing metaphysics from the discussion, when it seems pretty much the elephant in the room.

Daniel Hayes

Your Headache, Not Mine

John and I are writing past each other these days. Which is good and bad. I need to respond a little, and he does, too. But meanwhile, I’ve given some thought to a suggestion I made in my second try at dealing with Flanagan—about how there may be some blockage, or a deflection, that signals an ethical failure. I’m not quite comfortable with this idea, but I wanted to investigate it further by reading something by Stanley Cavell. Also, I find that the dismissal of epistemology, or what is sometimes simply referred to as metaphysics (hoping here I’m not missing an important distinction), troubles me. That is, the notion that our ideas about truth and reality and whatnot are best kept in-house, because they don’t really come into play in the political, ethical arena. This is Rorty’s point, I think, and it runs counter to Wendy Brown’s criticisms about neat liberal containers (private beliefs here, political concerns there, and tolerance…everywhere).

Anyway, here I go in another direction…

Epistemology is a door to be opened and then closed. This seems to be Rorty’s approach. Closed, because what’s revealed when opened is nothing much worth discussing. Epistemological inquiries don’t get us anywhere, and knowing that is enough. And so we move from epistemological questions to the real questions—the political ones, the ethical ones.

I can see now that Rorty appealed to me so much, when I first read Contingency, Irony, and Solidarity, because it seemed freeing to think that truth didn’t exist as something to be discovered through epistemological effort. But there’s something very dismissive in Rorty—both in his thought and his authorial manner. Doors get closed a lot. You see this in his responses to others. When asked to explain his distinction between the private and the public, for example, he says, flatly, “It is sometimes useful to remind people of a plausible distinction, without trying either to stabilize a frontier or to theorize a partition.” End of discussion.

Of course what’s crucial are Rorty’s ideas—whether they hold water or not. But I can’t read philosophers without also thinking of their writerly personalities. And so it’s somewhat jarring to go from Rorty to Cavell, who is a much more generous thinker and writer. Perhaps it had to do with what I chose to read: “Knowing and Acknowledging.” (And I got lucky in my somewhat arbitrary choice—since the essay has much to do with both epistemology and its relevance to ethical and political thought.) The conceit of his essay is to not attack skepticism—the usual course for philosophers of his stripe—but to be open to many of the good points that skeptics make. It’s only in the last few pages of the essay that this strategy pays off. And I think Cavell shows how you benefit from not closing doors prematurely.

The essay is about pain. Within philosophical history, the skeptic has long suggested—in wishing to deal with the problem of other minds—that it’s impossible to know whether another person is in pain, or whether the pain that you feel is the same as the pain that another person feels. We may think that other people have minds such as our own, and have like experiences, but we have no reason to claim that we know this. The skeptic thinks that we are separated from other people as we are from so-called reality (the example here is the tomato, and our assumption that its other side exists; to which Rorty characteristically responds, “The only people who go all existential about the invisibility of the rest of the tomato are lecturers in epistemology who relieve the classroom tedium by hype”).

Setting aside what this skepticism might imply—about our relationship to tomatoes and other people—Cavell concentrates on what it means to say, “I know you are in pain.” The skeptic thinks that this statement is a form of nonsense. You might say, “I am in pain,” and that makes sense. You might even say, “I know I am in pain” (although there is much quibbling about whether the “I know” is necessary). But to say that the other person is in pain is to obviously overstate your ability to know things. But Cavell wants to tease out what we usually mean by this statement, “I know you are in pain.” Typically, it isn’t used as a statement of certainty, but as a response to an exhibiting of pain. In other words, to say, “I know you are in pain” is to express sympathy through a kind of acknowledgment (“I know what you’re going through”). The skeptic’s mistake is to misunderstand, or wrongly attribute, the know in the sentence—to think that the speaker is making a claim of knowledge. That is, the skeptic isn’t being very attentive to how we actually speak and use words and mean things by using those words.

And then Cavell asks an interesting question: “But why is sympathy expressed this way?” Why, in other words, go to the epistemological arena for our choice of words? Why bring up the whole issue of knowing something? And if, in saying that we know someone else’s pain, our point isn’t that we are certain of something (as in, “I know the table has four legs,” or “I know the capital of California is Sacramento”), then what are we saying when we say that we know someone else is in pain? And this is where Cavell makes the distinction of the essay’s title—between knowing and acknowledging. When I say, “I know your pain,” what I’m doing is not knowing, in the strict epistemological sense, but acknowledging your pain.

What’s the difference? The difference is huge because suddenly there is a moral claim being thrust upon the scene, an element of choice that goes beyond the cold facts of knowing something (or not). Ignorance is now not a coincidence, nor is it a sign of epistemological failure. As Cavell puts it, “If one says that this is a failure to acknowledge another’s suffering, surely this would not mean that we fail, in such cases, to know that he is suffering….The concept of acknowledgment is evidenced equally by its failure as by its success….A ‘failure to know’ might just mean a piece of ignorance, an absence of something, a blank. A ‘failure to acknowledge’ is the presence of something, a confusion, an indifference, a callousness, an exhaustion, a coldness.”

Here, in this essay, and also in his thoughts on photography (where he argues, against the sway of common thinking, that a photograph is not a representation—our representation—but a bit of reality that should humble us), Cavell seems to be suggesting that there is something useful in going down the epistemological road. Where does it lead? To two places, I think. First, it leads to a form of modesty and a warning against human hubris about what we might know, telling us of the limits of what we perceive about the reality that surrounds us and inhabits us. This is somewhat in keeping with Rorty’s conclusion, though there’s also a sort of smugness with Rorty (too much shrugging of the shoulders, which is perhaps a sign of a different sort of hubris). But there’s also, at the end of the road, an opening on an ethical territory—a sense that what we don’t know makes claims upon us, and in part exactly because we don’t know.

In other words, from an epistemological point of view that Cavell thinks is valuable, what separates me from you is immense and undeniable (no warm fuzziness here about how “we’re all in it together”); and yet that very distance between us—between what I know about you and what I don’t know—creates each and every one of my ethical dilemmas. To ignore you because I can’t know you (the skeptic’s position) is no longer a possibility. But similarly, I think, Rorty’s epistemological “blank” (to use Cavell’s term) no longer seems right in describing my situation vis-à-vis you, in terms of what moral claims you may or may not bring to my attention.

John McGowan

Nothing Outside the Flux, Part Two

With all the preliminary discussions, I am now feeling that this post is going to seem anti-climactic.  So I will try to keep it brief.

Here are six things that have been considered “rights” at one time or another: 1. the right to a trial by a jury of one’s peers; 2. the right to practice the religion of one’s choice without any economic, political or legal penalty; 3. the right to vote; 4. the right to speak one’s opinion freely and to publish one’s thoughts without incurring any economic, political, or legal penalty; 5. the right to a paid vacation; and 6. the right to marry a person of the same sex.

What, if anything, do these rights have in common?  Arguing along Wittgensteinian lines, I would say that, at best, these rights may share a “family resemblance.”  They certainly have no obvious common essence.  If their unity were obvious, then (presumably) the people who advocated in 1300 for the right to a trial by one’s peers would have seen that their arguments for that right entailed (by logical extension) the right to vote and certain rights in relationship to employment conditions. But even the Levellers (in the 1640s) didn’t advocate for rights 5 and 6 on my list.

T. H. Marshall famously distinguished between political, legal, and social rights.  Legal rights generally seem to involve “equality before the law” as well as certain safeguards against the potential tyranny of the state.  Political rights are rights to participation, to having a voice–and hence are more connected to some kind of normative sense of democracy.  And social rights attend to well-being–and are thus connected to some kind of notion that each citizen (each human?) is entitled to the basics of a decent, secure, sustained and sustainable life.

People who write about rights have talked about the “inflation” of rights in the 20th century, exemplified by the entitlements (to health care, to good schooling, to unemployment insurance, to disability insurance, to a pension) that Mitt Romney resented. Social rights (in Marshall’s sense) came, more and more, to be added (in the Western world, at least) to the traditional “liberal” rights that protected against the tyranny of the state by providing for individual liberties like religious freedom, freedom of speech and association, and protection of private property.  But the term “inflation” here suggests that there is a core meaning to rights that is then devalued by extending our use of the term to include new things.  Such policing misunderstands how language operates.  Words do expand, even change drastically, in terms of meaning and coverage over time.  And the new meanings and new referents often have only a tenuous connection to the prior usages.

We can certainly point to historical factors that contribute to the shifting referents of the term “right.”  For social rights, we would look toward democracy and the pressure on popularly elected governments to provide basic sustenance as well as protection against abuse by employers.  For “human rights,” we would look toward the abuses of their citizens by 20th century governments and the subsequent attempt to establish some sense of what constitutes a governmental “crime against humanity.”  That this effort adopted the vocabulary of “rights” is more a testimony to the political effectiveness of rights (hardly perfect, but not negligible either) than to any resemblance to the rights established (for example) in the American Bill of Rights.

Each of the various rights, I argue, should be examined, understood, and evaluated in terms of a) the historical circumstances of its emergence; b) the harm it is meant to alleviate or the good it is meant to provide–and its success in performing that task; and c) the extent of its application (i.e. who is getting the benefit of that specific right).

The right to vote and the right to a paid vacation have nothing significant in common.  By “nothing significant,” I mean, nothing that is central to the evaluation of the legitimacy, desirability, or optimal functioning of the two different rights.  At the level of generality at which Hamacher writes, I find nothing that would influence my evaluation of whether the right to vote is a good thing or not.

I am hardly claiming that rights are sacred and cannot be questioned.  I am only saying that if you want to convince me that the right to vote is more harmful than beneficial, then talk about the right to vote.  Don’t talk about some general, transcendental conditions that underwrite all rights–and think that an argument on that level has the power to persuade me to abandon a specific right like the right to vote.

Now, one might say that the dictionary offers a definition of “rights,” so the category must have some coherence, some common core.  But, if we follow Wittgenstein, the counter-argument is that 1) a dictionary tries to fix something that is moving; 2) that even a dictionary will list several different meanings for a term.  So, for example, “right” means a) correct; b) a direction (the opposite of left); c) a political orientation (conservative as opposed to liberal); and d) a right in the sense of a legitimate claim.  It would surely be impossible to find a common core to those four meanings–or even to designate one of those meanings as primary, with the other meanings as derivatives.

But what about just sticking to definition d), a right as a legitimate claim?  A right, it is often said, is a claim that establishes an obligation on others to satisfy that claim.  In modern polities, it is usually understood that the state is the ultimate guarantor of rights. (But then came the 20th century, with the state often the greatest violator of rights.) The right to a paid vacation is a claim upon one’s employer–but it is the state that enforces that claim.  And the state’s enforcement usually only follows the establishment of the claim through a law.  So we might think that here we have found the common element.  Except that many of the traditional “liberal” rights (freedom of speech and of religion, for example) are rights against the state.  This group of rights identifies areas where the state is to have no power.  Of course, in some way, the state is asked to enforce the rights that limit its own power.  But the state can only accomplish that paradoxical feat through a separation of powers, through the establishment of an independent judiciary.  The complications here are formidable–and the possibility of failure high.  But–and here is the methodological point–this is not a logical or a necessary or a conceptual problem.  This is a practical problem.  There is a good (freedom of speech) and we are trying to establish the best way to ensure that that good is provided to as many people as possible in as many different circumstances as possible.  Accomplishing this feat is difficult.  But that is not a difficulty that could be alleviated by getting the logic or the concept right.

I think intellectuals, especially leftist intellectuals, have an unfortunate tendency to believe that if you get the analysis right, if you get the principles aligned correctly, that you can solve a problem once and for all.  But politics is much more fluid than that.  Whatever system you set up, people will try to game it to their own advantage.  So you (as a participant in political struggles) have to continually be vigilant, continually be tweaking the system or changing it, to address the newest obstacles to its functioning to actually protect and provide the rights you hold dear.

Rights, then, name political desirables. (They are aspirational, a notion that has come up in these blog posts before.)  We can certainly argue (and do) about what is desirable; and we can certainly argue about (and do) the best ways to secure what we desire.  But my desire for ice cream tomorrow, and for fame as an author the next day are two separate things–and arguments about the desirability of those two things and the best way to attain them are also two different things.

One final point–about pluralism.  I think that pluralism is my bottom-line metaphysical commitment.  I have no killer argument that establishes, beyond a doubt, that pluralism is the way things are.  So I, in fallibilist fashion, hold onto the hypothesis of pluralism until experience disconfirms it.  Pluralism means: 1) as some sociologist once wittily said, “There is only one law in the human sciences: some people do, and some people don’t”; 2) that any general term like “rights” or “liberalism” should be viewed with suspicion, as failing to attend to the significant differences that characterize members of that category.  Wittgenstein considered using “I will teach you differences” (from King Lear) as a motto for his Philosophical Investigations. 3) Crucially, that we are always in the land of trade-offs.  (This is Isaiah Berlin’s pluralism.)  We want a variety of goods, and those goods compete with one another, and are not fully compatible.  What Hamacher seems to believe is that we should either get an account of rights that eliminates all conflicts and messy inconsistencies or that we should abandon the notion of rights altogether.  But Wittgenstein advises instead that we “go back to the rough ground,” that we give over our fantasies of smooth sailing, and deal with the difficulties of this world, instead of imagining a frictionless Platonic ideal.  You can see in this last sentence why I call pluralism a metaphysical claim–because it does say this world is messy and imperfect and that the realist should be attuned to that fact.

John McGowan

A Brief Response to Tyler–with an Invitation

A) Necessity

Tyler’s comments return to the necessity issue we also touched upon this summer in my posts on Piketty.  I sense that we have a deep disagreement here.  That disagreement hinges, in part, on the understanding of necessity.  So I’ll try to be clear about how I am using that term.  I take my usage directly from William James.  The necessary is that which would happen whether or not a human being does anything.  The necessary cannot be prevented by human action, or pushed in a different direction by human action.  That’s why I keep connecting the necessary to that which exceeds human control.  For James, the impossible is what will never happen no matter what I do; the necessary is that which will happen no matter what I do; and the possible is that which will only happen if I do (or someone else does) something.  Pragmatism is suspicious of every philosophical claim to have uncovered the necessary–and strives to bring more and more territory into the domain of the possible.

How this understanding of the necessary connects to the element of chance, of randomness, in Darwin is not clear to me.  Nothing in the Darwinian universe is inevitable; it could always be otherwise.  But it is not amenable to human intervention either.  Here’s where the invitation comes.  Help me here, Tyler.  1.  How do you read Darwin out of the “laws of physics” and the 2nd Law of Thermodynamics?  How specifically deterministic are those laws?  How do they connect to random mutations?   2. Do genetic modifications engineered by humans count as effective interventions in the process of evolution?  For that matter, what about prostheses (starting with eye glasses and heading toward various kinds of implants)?

In short, are humans by-standers to evolution or are they in fact right there in the mix, doing stuff that makes a difference?

Tyler’s answers to these questions will, I hope, make his understanding of necessity more clear.

B) Metaphysics

Tyler is of the “you either have a metaphysics or you are deluding yourself” school. I want to avoid metaphysical assertions or speculations as much as possible.  I do agree that it is not entirely possible to avoid all metaphysical assumptions.  But let’s go to Tyler’s objection to my invocation of Wittgenstein about our deployment of the word “human.”  I want to be agnostic about “natural kinds.”  Whether our terms “pick out” the ways in which nature sorts itself is something about which I do not have any strong opinion.  And I don’t see what difference it makes if I take the pragmatic approach of saying that these categories work for us (until they stop working)–and so we’ll go with them for the nonce. We don’t need to worry about whether they are congruent with the “really real.” On species specifically, I thought part of the point of Darwin is that species are not stable, that they are in flux, so that our names for species are snapshots that suggest a fixity that is not there.

I find myself these days pushed more and more to a kind of dualism I would rather resist, one between nature and culture.  In short, I can see some very good reasons to be a realist about natural things.  I am more comfortable being metaphysical about the Darwinian view (for example) than about cultural institutions.  What does that mean?  It means that I do think that, in some fashion, Darwin “gets it right.”  He is describing how the world is–or better (since the world is in motion) how it works.  And that description does include some fairly set in stone constraints.

But when it comes to “rights” or “art,” I am a pretty thoroughgoing nominalist.  That’s going to be the burden of my next “flux” post.  There are no foundations or necessary conditions or determinative laws for “rights.”  Rights are just human contrivances, tools created for getting some things done.  And that’s why being metaphysical about rights is barking up the wrong tree.

John McGowan

The Hammer

We are quickly piling up tons to talk about.  But just a quick post to say that Daniel’s post on the hammer says more economically, more eloquently, and more persuasively one of the things I have been trying to articulate.  It is the season of giving, after all.  So I accept his gift with gratitude.

John McGowan

Nothing Outside the Flux, Part 1A

Before I get to parts 2 and 3 of this discussion of how to talk about human rights, I want to add something to my consideration of the metaphysical bias evident in the post-structuralist work that engages in a kind of “negative” Kantian exercise of transcendental critique.  Following Derrida, this version of post-structuralism likes to say that you are never being more metaphysical than when you are attempting to escape metaphysics.  So, instead, Derrida advises that we use (since it is inevitable) all the standard metaphysical concepts, but utilize them “under erasure,” always with quotation marks that signal (I guess) our ironic or skeptical relationship to those concepts.  This has always looked like having your cake and eating it too–and, more pointedly, it has never been clear what difference it makes in either theory or practice if we adopt this Derridean self-consciousness.  In any case, Hamacher follows this formula to the letter, writing an essay that demonstrates the unsustainable conceptual foundations of the concept of “rights,” but then employing that very concept when he lists the “rights” he wants to advocate as well as when he invokes “the right not to have rights.”

Since that basic move seems jejune to me, I am not very interested in pursuing its logic or its usefulness.  It’s a strategy I prefer to dismiss or ignore.  I hasten to add that I find other parts of Derrida wonderfully enlightening, so have no desire or need to dismiss post-structuralism as a totality (if it is a totality, which of course I would argue it is not).

Right now, however, I am more interested, in this addendum to my last post, to consider Nietzsche and William James.  It seems to me that metaphysics offers two consolations.  Or, to put it differently, there are two things that metaphysics gets you–and both of these things are relevant to the ongoing discussion of religion on this blog.

The first is necessity–an absolute bedrock foundation that establishes “the way things are.”  Freedom may be a good for which humans strive, but there seems a deep countervailing desire to identify the limits to freedom, to find the constraints that cannot be overcome.  There is something comforting, apparently, in saying: “Here is where you reach the limit.  Here is the reality or the law that cannot be gainsaid.  Here you, poor human being, must submit.”  I take it to be part and parcel of my atheism, with its built-in hubris, that I find the appeal of this submission to that which is greater than me and compels acknowledgment completely baffling.  If transcendence (the identification of something beyond the self) also entails submission to that something, then count me out.  I am interested in my voluntarily connecting to something beyond my self, and do believe that I find the most intense meaning and satisfaction through that connection, but I want to choose my connections for myself, and am also persnickety about the terms on which that connection is played out.  But there is plenty of evidence (a lot of it gathered in James’s Varieties of Religious Experience ) that many people feel differently on this score.  They find it consoling to hand over the burden of freedom to a higher or larger necessity.  Nietzsche hits this note, albeit in his own idiosyncratic way, when he talks of amor fati.  I can’t help finding that moment in Nietzsche’s work  masochistic–and, not surprisingly, find much religious belief similarly masochistic.

Necessity, then, is the place where metaphysics tells you that compulsion is found.  Here you must bow to the way things are.  Stop kicking against the pricks because it is futile.  Better to submit with good grace.  So my retort is: “look very carefully at the spot where any thinker locates necessity, posits the boundaries that cannot be breached. That’s where we should understand that thinker as having his greatest investment, his greatest fear.  That’s where he is terrified by the possibility of contingency, of unfixity.”  And pragmatism, as I understand it, is a philosophy of possibility.  It wants, insofar as possible, to replace necessity with possibility.  James is very explicit about this.  Everywhere we meet a constraint, we should experiment to see if there is a way to overcome that constraint.  We should be extremely wary of ever accepting a constraint or limit as absolute.  All kinds of things that were once deemed impossible are now possible.  We would be better off treating all constraints as temporary.  Maybe something is impossible now and not worth our continuing efforts to achieve.  But that something may yield to human efforts later on.  So, while I will almost inevitably acknowledge some constraints in my world and life, I should consider my identification of those necessities as “fallible” in precisely the way that Peircean fallibilism is recommended to the scientist.

So the first yield of metaphysics is the revelation of necessity–and within a certain psychological framework that revelation is reassuring.  The second yield–and one much more relevant to James than to Nietzsche–is a guarantee.  Strictly speaking, discovering necessity does not entail a guarantee.  One way of reading Greek tragedy (Nietzsche’s way, as well as Northrop Frye’s) is to say that it reveals a law that is inimical to human desires, but a law that is still consoling to the extent that it clearly defines a limit to human capabilities. Hubris is admirable (that’s why the tragic hero is a hero) because it shows the noble human unwillingness to accept an unjust, indifferent universe. But it is even more consoling to be shown that hubris must fail in its efforts, because there is a deeper fear of the chaos that would ensue if everything were possible, if humans experienced no limits to their efforts to act out their desires.

Once metaphysics gets yoked to theodicy (as it is in Aquinas, Leibniz, and Kant to name a few luminaries), then the revelation of necessity is connected to a guarantee that all will eventually come out for the best.  Suffering and evil will prove the means for the creation of good.  James is interesting because he rejects just about completely (with astonishingly few lapses) necessity.  But he hankers (famously) after the guarantee.  He wants to be assured that all of the human effort to create a better, more just world will not be in vain.  He needs God as the ultimate reality because God will ensure that the whole story of human history has a happy ending.  What James posits is a God who provides to humans “an unfinished world” as the stage on which they are to make their strenuous efforts to render things better.  So we have an absent God, but one whose existence guarantees, at the end of the day, that our strivings will not have been in vain.  Without that God, James thinks we will sink into a depression of total lethargy because we won’t see any point in trying to do anything.  James is fully aware that this God of his is a hypothesis, and (in fact) he personally finds it very difficult to believe in this hypothesis.  But he is equally convinced that belief in God (of his sort) is an absolute psychological necessity (at least for himself) if he is to ward off depression.

James is nowhere more a creature of the 19th century than in his conviction that life will lack all meaning, all purpose, if there is no God.  History must have a point, a direction.  Progress toward the better (James’s meliorism) must be assured.  The logic of this position–or maybe its feeling more than its logic–has always eluded me.  It seems self-evident to me that people find plenty of motivation, plenty of get up and go, in their daily affairs (getting and spending; caring for themselves and for loved ones; pursuing recognition for their achievements; playing games, and eating, and having sex) without worrying all that much about whether it all adds up to something, or (even more remotely) whether history is going to have a happy ending.  Nihilism may seem like a real possibility to the philosopher in his study, but it is awfully rare to encounter nihilism in the flesh.  People are desiring animals and they are striving to satisfy their desires all the time.  Yes, depression exists, but I doubt that it is very often philosophically induced, and I doubt even more strongly that it can be philosophically cured by a willful belief in a beneficent god.  But perhaps that’s just my lack of imagination about other people’s mental make-up.

In any case, if we think about the guarantee part of a lot of traditional metaphysics, we are back to the radicalness of Darwin.  In Darwin, history is directionless.  It is motored by sheer randomness–mutations that cannot be caused by human or animal action, that are completely disconnected from human or animal desire, and that are subject to equally random shifts in the environment.  The world is neither stable (but, instead, always in flux) nor headed anywhere in Darwin. No progress, no control, and non-human mechanisms.  No wonder Darwin scared the bejesus out of his contemporaries and still terrifies people today.  We could certainly say, at one level, that Darwin identifies a necessity.  Evolution will play itself out.  But the necessity in Darwin is one that is always in process, always in flux, so there is nothing substantive to be identified as the way things are.  There is just the process itself–and certainly nothing outside that process secures the issue of it.  Peirce, James, and Dewey absorbed a lot of Darwin.  It is just that Peirce and James were trying to wriggle out from under some of the more radical consequences of Darwinianism.  Both Peirce and James try, through their own peculiar versions of theism, to use God to guarantee progress of a sort that Darwin rules out.

However, in the pragmatist mode of continual experimentation to find ways to overcome constraints, one response to Darwin (and the current version of Darwinian thought influenced by the genetic science Darwin did not possess) has been the refusal to accept that the mechanisms of evolution are beyond human control.  The first form this push-back took was eugenics, and that was on the table from almost the moment The Origin of Species was published.  Today we call it genetic modification or gene therapy, but it is within the same ballpark as eugenics insofar as it represents an attempt to create new organisms (or modify existing organisms) in such a way that humans acquire more control over what the non-human (nature) provides.  The ethical dilemmas raised by the new genetic sciences are formidable–and, I would argue, continuous with the ethical issues surrounding humanism that our blog has been exploring.  All I will say here is 1) that genetic science is yet another example of how the lines of hard and fast necessity keep dissolving.  Yesterday’s constraints prove manipulable contingencies today when we discover ways of intervening in natural processes, ways that we did not possess yesterday.  Another reason to be wary of metaphysical claims about the ways things must be.  2) There will be no hard and fast, all-encompassing answer to the ethics of genetic manipulation.  Decisions will have to be made instance by instance (part of my suspicion of totalizing thinking, a suspicion driving much of this three-part response to Hamacher, and about which there will be much more said in part two).

Daniel Hayes

An Antidote to Head Scratching

I’m reading Rorty, and about Rorty, and I’m starting to get the lay of the land. I think I’m moving in the direction of understanding John’s way of thinking.

I’m thinking back to my ten principles of post-religious thought (in the “Man? Still” entry from a couple of weeks ago), and how I considered them a good start to explaining a secularist, atheistic point of view. I also presented them because I figured that John would be sympathetic. He would, more or less, agree with these principles.

If so, then how might John become, politically, a liberal humanist? (Is that a fair description?) In other words, we’ve been reading (Brown, Hamacher) attacks on both liberalism and humanism, and John has pretty much presented a defense of these points of view. And I end up scratching my head a little—wondering why, given what I assume is a thumbs-up on my tenets of post-religious thinking, he’s such a staunch advocate of humanism and liberalism (which seem based on a suspect metaphysics).

I understand now that it’s that “based on” that disturbs John. (Hence, my previous entry, “The Hammer.”) Who cares what political strategies or commitments are based on?

Simon Critchley, in a contribution to “Deconstruction and Pragmatism,” presents the following account, which I think might provide another antidote to head scratching:

“Most of the citizens of ‘the rich North Atlantic democracies,’ for reasons of either religious belief or a vague, residual attachment to the humanistic values of the Enlightenment, are liberal metaphysicians. Such people are genuinely concerned with social justice, and they believe that there is one, final moral vocabulary—Christian love, classical liberalism, liberties underwritten by tradition—for deciding political questions, a vocabulary in touch with our essential humanity, our nature. On the other hand, although clearly outnumbered by the metaphysicians, there are non-liberal ironists who are concerned with their self-realization, and perhaps the realization of a small group, but who have no concern for traditional liberal questions of social justice. [Rorty wants to] persuade liberal metaphysicians to become ironists…and non-liberal ironists to become liberals.”

Why this middle position, “liberal ironist”? In short, because Rorty believes in a distinction between the private and the public. It’s a distinction I don’t yet understand, though I’m starting to see its implications—the supposed dangers of misapplying an antimetaphysical viewpoint to the public realm. Apples and oranges, in short.

In any case, I’m starting to see why John is unsympathetic to the idea that metaphysical skepticism—about foundational ideas, whether relating to “God” or “Man”—might have something to do, one way or the other, with attitudes toward political ideas that make use of humanist categories or strategies. What is really suspect, according to this way of thinking, is the naïve idea that there might actually be a non-liberal, ironic political viewpoint. To believe this way, according to Rorty, is to not step into the political arena at all—that is, to mistakenly think metaphysical speculations (even antimetaphysical speculations) have anything to do with politics, notions of justice, etc. Non-liberal ironists are seen, instead, as only concerned with the solipsistic purity of their own thinking (or what Critchley refers to as their “self-realization”).

Daniel Hayes

The Hammer

First off, John has perfectly described my thinking about Darwin and this notion of “the human.” I do think I’m talking about a kind of evasion. (Cavell refers to this as “deflection.”) On the other, different point (his second paragraph), I’m still struggling a bit, though I’m certainly signed up for avoiding “neat solutions and clear imperatives.” I can see that John might be suspicious of the idea of “seeing things clearly” as some sort of ultimate goal. But on the other hand, what else are we trying to do? And isn’t this what Flanagan was trying to do—that is, sweep away some of the misconceptions and deal with what little we can be sure of?

I’m reading “Deconstruction and Pragmatism,” edited by Chantal Mouffe, in order to bone up on Rorty and such. (The book includes three responses by Rorty to intervening essays.) I’ve read a couple of Rorty’s books, and I remember some things, but not enough. One reason I could never be an academic is because I can’t remember what I read, and so there’s only a very weak form of accumulation. (Now is now, and then was then—a river that flows by.)

But in the meanwhile, I want to see if I can understand John’s point—his criticism of Hamacher, his notion of things “going all the way down.” (Where did this drilling metaphor come from? It’s very powerful, and used over and over. I’m partial to it, too, though its insistence sometimes suggests a worry.) And I realize that John is only a third of the way through his explanation of his misgivings with those who would trash humanism or subject it to a sort of high-minded tsk, tsk. But…

So let’s say we have a toolbox. It’s full of tools, and one of our favorites is The Hammer. The Hammer is very heavy and very shiny and it was handed down to us by none other than God. When we raise The Hammer, we know we are doing the work of God because we are using His instrument (or one of them). In short, this is foundational thinking of the highest order.

But let’s say years go by and we still have the same old toolbox, and we still have a hammer. It may resemble the one we used to have—that is, The Hammer—but we don’t consider this one special (beyond its usefulness), and we certainly don’t think of it any longer as a tool handed down by God. Oh, we remember how people used to wield the hammer that way, with that kind of confidence, but now we think of it as just an ordinary hammer—useful but not magical. In short, this is antifoundational thinking according to a progressive secularist narrative.

John’s point is that there isn’t anything inherently foundational about the hammer. It’s silly to think that we need to have discussions about the hammer itself, somehow attempting to establish whether or not it is a secularist tool, or whether or not it is somehow tainted by its past use (or even its present use) as The Hammer. To do so is—oddly, in a roundabout way—a foundationalist move, as though there was something in the hammer, a kind of essence of hammer, that needs to be metaphysically established, once and for all, before we use it.

As John might say, whether the hammer is coherent or not is beside the point. Those who claim that the hammer “cannot be legitimate unless we find a secure, coherent account of the very structure” of the hammer—these folks are barking up the wrong tree. John’s point, if I understand it, is that such conjecturing is a waste of time and energy (when we could be figuring out what the hammer is good for, how best to use it). And it also fails to take advantage of the secularist situation, which involves not having to think about foundations one way or the other.

Did I get that right?

John McGowan

Being Animals

A quick response to Daniel’s last post.  I think he makes it very clear that the great challenge posed by Darwin is: how do we, soi-disant human beings, come to terms with the fact that we are just animals?  Nothing more, nothing less.  (And the repetition of soi-disant is deliberate.)  To accept the Darwinian insistence that we are just animals is to see why Darwin is so profoundly secular, such a threat to religion.  Daniel’s argument is that the category “human” exists within a hierarchy, a great chain, that has to posit the divine and the animal in order to create a space in between those two for the “human” to occupy.  So: if we accept Darwin and we announce that we are secularists, then the category of the “human” collapses.  Or, at least, we should face up to the history, the genealogy, of that category, recognizing that the “human” gets rolled into position (and gains in prominence) precisely as a way to evade the full implications of Darwin and of the death of God.  Humanism is bad faith (to pun on Sartre’s famous stricture).

Another different, and maybe minor, point.  Daniel ends with a rhetorical flourish about seeing things as the ultimate moral responsibility.  It is exactly that kind of statement that my anti-metaphysics posts are designed to resist.  I want to say that there is “no ultimate moral responsibility.”  There are just multiple moral responsibilities, and none that is more ultimate, that always trumps the others.  Instead, our various responsibilities are called into play in different circumstances, and they often conflict.  It is trade-offs all the time and all the way down.  Neat solutions and clear imperatives are more the exception than the rule.

Daniel Hayes

“Stop Treating People Like Animals”

I read John’s last entry and learned a few things—or felt a coherence in thinking that I hadn’t before. In other words, I found out why John is a teacher—and obviously a very good one. Adroitly, he described both Flanagan’s ideas and the overall situation where we find ourselves (thinking as secularists). It’s a very messy situation. I think John would agree with me about that. Or if not messy, then difficult. (That is, it very much required John’s explication.) We are still trying to figure out how to proceed without God, without religion, and none of this is easy.

I think Flanagan’s approach aims in a direction—but then stops short. I’ll try to explain what I mean below.

Let’s begin with a naturalist way of thinking. The argument here is that there is no supernatural realm, and so we accept the fact that we are animals on earth, and then we need to get to the business of figuring out what kind of animals we are. Enter the latest science, updated phenomenological understandings, etc. As John explained Flanagan’s approach, what matters to human beings, minus the mumbo-jumbo, is consciousness and the way that it creates meaning. That meaning can be configured, or not configured, to allow happiness or what’s sometimes referred to as flourishing. It turns out that we seem to flourish when we have a purpose, when we are free to do as we wish, and when we are connected to other people.

Good enough. But taking it a step further, let’s say that flourishing is a good. (We’ve now moved into a philosophical and ethical realm.) And we might wish to promote the good. If given a choice (and what is morality, if not an argument over the proper response to a supposed choice?), we would want to remember that flourishing is what we humans most treasure. In other words, it’s not good to cause the opposite of flourishing (pain, isolation, constraint) if we have the choice. As Flanagan says, all of us want the opportunity to “express our talents, to find meaningful work, to create and live among beautiful things, and to live cooperatively in social environments where we trust each other.” And so Flanagan, describing the naturalist approach, seems to be describing a sort of struggle—not quite Darwinian, but close enough. We don’t quite flourish the way we wish, but we do the best we can.

To be truthful, I’m a little uncomfortable here—on methodological grounds. Doing the best we can is one thing; supposing that we might change things, jump out of our evolutionary selves and create more flourishing, is another. What more might we do? In a Darwinian sense, it would be like asking us to evolve more quickly. Not really possible. Or, in a very non-Darwinian sense (which is often packaged as “Darwinian”), it would be like asking us to struggle to become more and more like ourselves—as though evolution had stopped, reached its pinnacle, and now it was up to us to take over and finish the job! (A nice humanist vision, but not really naturalist at all.)

In short, the argument here about morality is very tricky. On the one hand, we are simply animals who traffic in meaning, who pursue happiness, and who shuffle the same 52 cards over and over. (The alternative view, about how we create our own lives as we live them, seems like warmed-over hubris.) On the other hand, morality isn’t something added on to our natural existence, like a cherry on top of an ice-cream sundae. We are naturally moralistic—we are empathic creatures who are not just selfishly pursuing our own happiness but, instead, also find meaning in the happiness of others. And so what’s the (moral) problem? If people wish to flourish, and wish to see others flourish, is there something that blocks a realization of this wish?

One way of thinking about this is suggested by Flanagan’s mention of Martha Nussbaum. Nussbaum is big on flourishing, but her work goes beyond human beings and involves other animals. (Nussbaum is part of the “animal welfare” wing of animal studies, and therefore quite conservative in her approach, but I’m going to skip over these distinctions.) For philosophers like Nussbaum, Flanagan has a big blind spot: he acts as if consciousness and meaning—and, by extension, flourishing—were simply and exclusively matters of human existence. Is a cow conscious? Does a cow’s world involve meaning? Surely it does. Whatever else you might say about a cow, it seems silly to suggest that a cow doesn’t wish to flourish, doesn’t prefer some situations over others—and that these preferences are a function of a consciousness. And so why do cows routinely get forgotten when philosophers like Flanagan write so movingly about human attempts to flourish?

I bring this up not to move into a discussion of animal studies, or the way we routinely cause violence to cows without a blink of the eye (i.e., without a philosophical moment of queasiness), but to suggest how the moral question might come into play for the naturalist naively intent on his humanistic prejudices. According to this way of thinking—yes, there is something blocking a realization of the empathic wish for others to flourish. That is, contrary to their best humanistic tendencies, humans play a game with themselves whereby they view only some humans as worthy of empathy. (I’m concentrating on empathy because it seems to be crucial to any naturalist idea about human flourishing, which is why the topic of altruism is so important in any such account.)

And so when any humanist thinks of the terrible things we do to each other, and the ways in which we abuse human dignity or fail to recognize the human rights of others, it’s always a matter of some humans being put in separate, derogatory categories. For example, American slavery requires a viewpoint that says that African-Americans are not fully human. Or in the practice of torture (the perfect antidote to Flanagan’s flourishing, since it involves constraint, pain, and isolation), those tortured are deemed less-than-human and therefore not worthy of moral consideration. According to this naturalist, humanist viewpoint, we treat others with human respect unless we don’t; and when we don’t, it’s either because we are less than human or they are less than human.

Oddly, it seems to me, people have always looked at these humans who are not quite human as animals. In short, they are beasts. This seems strange. Is it just a coincidence? Why would this—the animal—be the category assigned for those humans who we wish to deny moral consideration? Why do we need that category? The humanist simply shrugs his shoulders, and says, “Yeah, well, that’s the problem: we need to stop treating people like animals.” Might it be that this very habit of thinking of humans as possibly animals is based on a philosophical proposition that has a long religious history? Might the very thing that humanists so want—to treat humans as humans (for whatever reason, whether because of “natural law” or naturalist sympathies)—be a symptom of a larger problem that needs to be addressed and that runs contrary to humanist thinking? Doesn’t it seem, upon reflection, that working so much to isolate humans from both the divine and the animal isn’t so much a secularist move but a simple reformulation of the Chain of Being? And isn’t that…embarrassing?

Part of what disturbs me here is the lack of any historical approach to things. My question above—Why do we need that category?—is a historical question, a genealogical question. It requires us to do the difficult business of inquiring into the full extent of a religious vision of things—where it came from, what it looked like, why it even involves a category so seemingly superfluous (animals). This type of secularist approach isn’t about being good, doing good, being more humane in response to God’s disappearance, but about seeing things clearly, about honoring the idea that seeing things is the ultimate moral responsibility.

John McGowan

Nothing Outside the Flux, Part One

I think this explanation of my basic objections to the kind of philosophizing done by Hamacher (and, to a lesser extent, Wendy Brown) is going to come in three sections.  So this is part one.

One of the more annoying tics in Richard Rorty is his propensity to say “pragmatists believe” and then go on to state a position.  I think that just about everything I am going to say in these three posts can be derived from Rorty and, thus, qualifies as some kind of version of pragmatism.  But I am going to use the first person instead of claiming to speak in the name of pragmatism, and I am not going to make the connections to Rorty or to other pragmatists explicit except in a few instances.

OK.  Enough for preliminaries.  The topic for today is metaphysics.  Traditionally, philosophy has sought to find the hard, permanent, unchanging stuff that undergirds the contingent events and entities humans encounter in the world.  For Plato, that meant a realm of ideal forms that existed somewhere beyond this mutable world.  For Aristotle, it meant identifying the “forms” and “substances” that were the solid stuff upon which natural changes were rung.  Aristotle wanted to remain in this world, but still aspired to identify what in this world was immune to change, was always invariably true.  When we get to “modern” philosophy, which starts with Descartes, the effort is to provide a metaphysical account that supplies the solid foundation for the discoveries of modern science, especially physics.  Modern philosophy (in Descartes, Spinoza, and Kant especially, but also in the various versions of empiricism right up to the logical positivists) understood its task to be swooping in to give a general account of the basic stuff of the universe, an account that explained why modern science was true (in Descartes’ and Spinoza’s cases) or was possible (in Kant’s case).  What philosophy could not accept (and this is what made Hume such a threat) was that there was nothing at all “underneath” science or underneath the phenomena that make up the mundane.  No deep structure, no eternal framework, just things as they are experienced and as they appear.  It is no accident that Hume was both an atheist and a skeptic.  For Hume, there was no reason things were one way rather than another–and no guarantee that the way things were today would be the way they would be tomorrow.  It was all contingency, all flux, all the way down.

When we get to Nietzsche and the pragmatists, we get philosophers who try to abandon the metaphysical game altogether.  For William James, “nothing outside the flux secures the issue of it.”  His “radical empiricism” claims that the flux and our experience of it is all there is.  (Of course, James does not practice this radicalism consistently.  I’ll get to that issue in a minute.)  To end metaphysics would be to say, among other things, that it is presumptuous of philosophy to think that science somehow needs philosophy in order to be true–or for philosophy to think that it is the “queen of the sciences” (Nietzsche’s phrase) because it provides the solid grounding for all knowledge.  Instead, each science (both natural and human sciences) stands on its own, with its own canons of evidence and reasonableness, and its own relative success in proving persuasive in its claims and explanations.  Philosophy has nothing to add–and is certainly not capable of serving as the final arbiter in the inevitable disputes that arise within specific intellectual enterprises (or, though I hate the word because of the institutional forms it has taken, disciplines).

Following Nietzsche and the pragmatists, I aspire to what I call metaphysical parsimony.  I want, in my own thinking, to make as few metaphysical assertions as possible, and I want to attend to various issues in terms of their specifics and the way those issues play out as they are acted upon and as they act in specific contexts.  This means breaking down the propensity to generalize as much as possible; it means giving up once and for all what Dewey called the misguided “quest for certainty”; and it means trying to elevate contingency over necessity at every turn.

In particular, I want to combat what I have called “transcendental blackmail.”  Since Kant, one form of metaphysics has been “transcendental” in the Kantian sense.  The goal is to uncover the “necessary conditions” for a knowledge claim or a concept.  Hamacher is practicing Kantianism of this sort.  He thinks he can tell us what is the deep structure, the underlying enabling conditions, of the concept of “rights.”  And then, once he has revealed this deep structure, he will show us that the concept of “rights” is deeply incoherent and, thus, if not unsustainable, then (at least) operating under severe strain.  We can call this negative Kantianism.  Kant wanted to provide a way to secure our basic concepts as fundamentally coherent. Hamacher, using the same Kantian transcendental logic, will show us our concepts are incoherent.

But I want to refuse that whole way of thinking.  As a pragmatist, I want to convince you that a “concept” does not have some kind of internal structure that determines the form it takes–or whether it is coherent or incoherent.  I want to say that “coherence” is beside the point.  A concept, like a thing, should be viewed as the famous pragmatic maxim advises us to view it: in relation to its effects.  If the concept in this specific situation proves effective in relation to something I am committed to achieving, then its general “coherence” is irrelevant.  Pushed further, I would argue that the very notion of “general coherence” is nonsensical.  A thing or concept only makes sense in a context.  To give an example, my “right to vote” is meaningful in 2014 America; it is meaningless in 1300 America. Rights, like most anything else, are historical and contingent.  Their meanings change over time, and their ability to achieve what they are deployed to achieve varies widely from context to context.  There is almost nothing useful that can be said about rights generally and abstractly apart from their specific articulation in specific contexts.  In short, philosophy at the level of generalization and abstraction at which Hamacher operates has nothing to offer in determining the legitimacy and effectiveness of rights.  That’s what it means to refuse “transcendental blackmail.” It means to refuse philosophy’s pretension to legislate about various claims’ legitimacy and coherence on the basis of an uncovering of their fundamental enabling conditions (or structure).  At its most radical, this refusal of the Kantian move claims that there are only appearances, only experiences, only the flux.  There is nothing underneath those appearances.

Now, of course, this is where the metaphysician says “I’ve got you.”  The pragmatist is making a metaphysical claim when he says it is all appearances, it is all flux, there are no foundations.  Metaphysics is inescapable.  Perhaps that is true, although Barbara Herrnstein Smith has some very good arguments against this self-refutation argument.  But I prefer the Rorty route myself.  For Rorty, we now have had a century of debate between the metaphysicians and the philosophers who want to end metaphysics.  As a result, all the moves and counter-moves in this debate are well mapped out.  Neither side seems able to convince the other–and so it all begins to seem a rather sterile affair internal to philosophy.  In the meantime, there are real issues about (for example) human rights, real problems (including the one we have already discussed a bit in these posts: who gets to have rights?).  So I, for one, choose not to spend my time fighting against the people who say rights cannot be legitimate unless we find a secure, coherent account of the very structure of rights.  Instead, I want to spend the majority of my time addressing the question of rights on an entirely different footing, one that doesn’t worry about foundations and conceptual coherence.

To Rorty, that means changing the topic, refusing to believe that I must argue metaphysically in order to be truly philosophical on the topic of rights.  It’s another form of “transcendental blackmail” to say I must attend to foundations in order to be thinking truly consequentially on a topic.  Rorty’s gambit is that he will make more progress if he dismisses the transcendental arguments as irrelevant to his concerns and proceeds to argue on an entirely different basis.  There is nothing to be gained at this point by engaging Hamacher on his chosen terrain. To do so is just to concede that the metaphysician has identified correctly the solid ground on which all arguments must stand.  But I want to provide an argument that proceeds in an entirely different fashion.  And that’s what I will try to do in Part Two.


Thoughts on Humanism, with a Dash of Flanagan

My to-do list is piling up since I still (I promise!) will write a long quasi-essay on Hamacher and what I find objectionable in the line of argument he pursues.  But that’s for another day.

Right now, I want to work with Daniel’s rich post on humanism and atheism, by way of thinking about Flanagan.

Flanagan is a naturalist, which he takes to mean that there is nothing supernatural in the universe.  We live in a world of natural causes and processes.  He insists that one of those natural processes is consciousness.  Humans have the experience of “being conscious” as a result of biochemical processes that are still opaque to us.  But he takes it as axiomatic that conscious states can be mapped to biochemical reactions.  He wants, though, to avoid any kind of reductionism.  Water “feels” wet, it flows, it puts out fires, and it is good for you when you’re thirsty.  All of those things about water are real, even as we know that water is H2O.  Water (our phenomenal experience of it) doesn’t reduce to its molecular structure.  But its molecular structure is there alongside all of its “qualities”–and is causally related to those qualities.  So that’s his naturalism.

The next step is for him to say 1) the fact of consciousness is not, therefore, such a big problem.  All kinds of natural things have multiple ways of appearing.  But 2) the really hard problem is how things have meaning.  There is nothing inherently meaningful about water, about its wetness, or about the molecular structure that gives water its qualities.  Yet, that things be “meaningful,” that they somehow add up to something, or at least elicit our desires and our allegiance, seems crucial to any possibility of human happiness (or, if you prefer, flourishing).

So: where does meaning come from?  The atheist (and the naturalist of the Flanagan variety) says: not from god.  And the naturalist, it seems, also doesn’t believe that natural processes carry meaning.  Earthquakes are not meaningful in and of themselves.  Enter Feuerbach.  The source of meaning must be humans themselves.  But that, again and again, appears too flimsy a foundation on which to build.  Humans, as Nietzsche delighted in reminding us, are especially prone to lying to themselves, so why should we believe the stories that humans contrive about the meaning of things?  If we take the Feuerbach route, then every account of meaning looks exactly like religion: a human projection onto brute reality.  The stories can have very different content, but they all have the same form.  Humans try to displace the origin (or source) of meaning from themselves onto something that exists over and against humans, and thus has some kind of solidity, some kind of reality.  The terms that serve the same basic function as god proliferate: reality, nature, being, logos, the noumena, natural laws, human nature.  Humans keep searching for something solid, something that is not subject to change, destruction, and uncertainty.  Something that they have not made.

Now a certain group of philosophers–of whom Dewey and Nietzsche are early examples and Rorty a more recent one–have told us that we should give up (in Dewey’s words) “the quest for certainty.”  Instead, we should go all the way back to Heraclitus and accept that the “flow” is all there is.  It is just contingency all the way down.  There is nothing solid.

One response (a response Nietzsche endorses part of the time) to trying to give up all god-terms (any solid foundation) is to go the Promethean route.  If all meaning and purpose are just human creations/projections, then you should try to impose your will on the world and on others.  Because such efforts will always produce conflict (the world and others will resist your efforts), the Promethean route does discredit solipsism.  If meaning is a human creation, it is not a personal, subjective creation.  The self occupies the world alongside others and with non-human players, so either it’s all conflict all the time, or there are periods of relative stability, of negotiated settlements that inter-subjectively arrive at some established guidelines and maybe even some agreed-upon meanings.  For twentieth-century philosophers after the “linguistic turn,” language seemed a prototypical instance of inter-subjectively, communally produced meaning.  But these philosophers, following Wittgenstein, stressed that language never stands still, that it is constantly evolving, and that such evolution is never directive or purposive in the sense of someone controlling where it is going or with any guarantee that changes are improvements.  Flanagan accepts this non-directional understanding of constant change because he is, as he keeps saying, a neo-Darwinian.  He thinks evolution (purposeless random change) is pretty much how things go.

But Flanagan, like Dewey, doesn’t want to go the Promethean route.  So he has to find a way to introduce meaning while accepting that the natural processes of evolution and biochemistry are meaningless.  And here is where Daniel’s post nails it.  Flanagan addresses this problem by making a core claim: The human animal pursues happiness. It follows that the primary question for humans is: what brings happiness?  How can I flourish?

And then Flanagan says that the best evidence suggests that humans flourish when each self connects itself to something greater than the self.  Here’s where transcendence enters.  Transcendence is the name for the experience that something exists that is greater than me–and, here’s the key move, that something greater merits my allegiance.

Flanagan’s candidate for that something greater is humanity.  Humanism (as Daniel argues) is the name for the philosophy that places humanity in the place of transcendence.

Now, why should humanity earn my allegiance? Flanagan’s reasoning is not totally clear.  There are two possibilities: 1) it just turns out that attending to the needs and desires of others, minimizing their suffering and enhancing their flourishing, is the surest road to my own personal happiness.  All the other contenders as roads to happiness just aren’t (going by the empirical evidence) as trustworthy.  Or 2) there is some kind of evolutionary built-in that makes individual humans solicitous of the survival of the species.  Lots of Darwinians hold this view, that basically our most primal drive is to pass on our genes to ensure that humans continue to walk the earth.  Flanagan, not surprisingly, treads very lightly here, because just passing on your genes seems rather remotely connected to the notion of flourishing.  He wants more than survival of the species; he wants happiness.

In sum, Flanagan believes we need transcendence, something greater than ourselves, to be the focus of our efforts, our loyalties, and our desires in order to have a fully flourishing life.  And he thinks the best candidate for this something greater is humanity itself–although his Buddhism would suggest that he would not be hostile to the idea that assuming some kind of responsibility for the world at large (humans and non-humans alike) would also serve the basic purpose.

Dewey and Rorty are completely straightforward humanists.  Rorty thinks there are two paths to happiness (both of which should be pursued): a public path that can be boiled down to the effort to reduce human suffering and a private path that is about developing one’s capabilities and talents to their utmost.  Liberalism for Rorty is about reducing suffering first of all and then about securing the means and institutional structures that allow people to pursue their own pathways to happiness.

Should it bother the humanist that humanity comes to occupy the place of transcendence?  Does atheism require that the place of transcendence stand empty?  And what would a worldview that eschewed any form of transcendence look like?  What kind of life would it entail, what kind of image of what it meant to flourish?  Is it possible to imagine a life that didn’t rely on some notion of purpose, some sense of the meaningfulness of trying to achieve certain things?

I take all of those questions seriously.  That’s why, from the start of this conversation, I’ve thought that transcendence was one of the very hard things to think about.  Even if the atheist insists that the things to which he is loyal are contingent, ever-changing, uncertain, there is still the need (desire?) to justify the loyalty–and, more than likely, advocate that others share that loyalty.  And that advocacy, even if it only takes the form of explaining for oneself why I have the loyalties I do, seems to introduce a god-term.

Daniel’s posts, it seems to me, 1) point to this re-appearance of god-terms in even the atheist’s discourse.  I am not all that troubled by this fact.  I think it overestimates the effect of a formal similarity between religion and humanism (i.e. that they share the form of positing something transcendent) and underestimates the differences that come from the very different content of what occupies the place of transcendence (something natural instead of something supernatural, the human in place of the divine).  But I do see that this form of thinking seems ubiquitous, and would like to find a way to escape it altogether.   And Daniel has shown that such an escape is astoundingly difficult to conceptualize.

What I take to be Daniel’s second point is much more troubling to me.  That is the argument that once a source of allegiance is posited, we can expect something resembling religious wars to break out once more.  There will be penalties exacted on those who don’t share that allegiance.  Most pointedly, humanism, even if it manages to extend its concerns (to alleviate suffering and enable flourishing) to all humans, seems to entail a lack of concern for the suffering and flourishing of non-human animals.  Since I have no good answer to that issue, the open question (the one Daniel’s posts push us toward) is whether there is a non-humanist atheism that, by avoiding appeals to a transcendence to which selves are to be oriented as they strive for meaningful lives, provides a way to inhabit this earth and live our lives more ethically than those dependent on transcendence have managed to do.
