Read Adam Kirsch’s review of Robert Alter’s translation of The Wisdom Books (Job, Proverbs, and Kohelet).  That they serve as a kind of counterpoint to the world’s narrative as established elsewhere in the Bible is not a new observation, but it is one Kirsch does well to make, and he makes it in the context of language: that of Alter’s translation, and that of the literary merit of the texts themselves.  On a surface, line-by-line reading, they are perhaps contradictory.  They are even so in their broader sense — yet they are (and, more importantly, were, millennia ago) accepted as revelatory, just as were (say) the Psalms or the Torah itself.

All of these are, of course, different kinds of revelation: Torah as, in a strict sense, a legal-historical revelation; Psalms as a personal-poetic; then the wisdom books, discussing something more akin to man-in-general, but still a sort of personal-poetic revelation.  The question, however, still remains: can you have revelation that contradicts itself?  Does this revelation, as it may initially appear, contradict itself?  And then what — especially for one who, like me, is inclined to believe that truth is inherent and inhering in these books?

The seeming contradiction, and its acceptance by earlier (now ancient?) generations and their leaders, points toward how we can and should (and must, at times?) read the Bible: not as a singular unswerving narrative, but as a mixture of voices, all trying to understand man and God from their own positions; all of which, the tradition holds, experienced revelation of some kind.  The literary truth of Job or Kohelet may seem to contradict the narrative truth of the Creation, as Kirsch notes of Alter’s translations: but this does not mean that either is untrue, or that we must choose one or none.

Biblical truth is closely related to the truth we find in art and literature.  It is various and multiform but exists nevertheless.  What strengthens this truth to something beyond that which is found in art or literature is the idea of revealed truth, if one accepts it.  But the composition of the Bible — that it seems to contradict itself; that it leaves gaps and jagged edges; that the truths of the various books shout, at times, against each other and then with their opponent of a moment ago — is a kind of instruction from those who lived before us — from, in essence, the founders of the religion as a religion of a book — as to how we, millennia later, should read it.  Let the gaps and rough edges stand and try to understand them, and the whole, as they are, rather than try to sand them over into some immaculate unified whole that in the end becomes wholly uninteresting.

After all — even though Kohelet’s men and beasts are equals, unlike the man who is given dominion over beasts in Genesis, the “mere breath” that is all is merely the air which the same earlier book claims God breathed into Adam’s nostrils.  In the end, perhaps, the more interesting truths are those found in the gaps.

I propose that the core assumption that remains unchallenged and unquestioned through all the variations within the diverse traditions of “modern” thought is that the experience and testimony of the individual mind is to be explained away, excluded from consideration when any rational account is made of the nature of human being and of being altogether.  In its place we have the grand projects of generalization, solemn efforts to tell our species what we are and what we are not, that were early salients of modern thought. – “On Human Nature,” p. 22

…the beauty and strangeness of the individual soul, that is, of the world as perceived in the course of a human life, of the mind as it exists in time. – “The Strange History of Altruism,” p. 35

The accuracy of Robinson’s claims about modern thought in these short excerpts is less important than what they tell us about her purposes in Absence of Mind—and, perhaps more importantly, in her fiction (especially the recent novels Gilead and Home).  The collected product of the subjective experience and life of the individual human mind, she writes elsewhere, is culture, literature, and art.  She indicates, too, that the exploration of subjective experience is the purpose, and, when successful, the highest calling, of the novel.  Speaking of Paul Harding’s novel, Tinkers, she writes, “It confers on the reader the best privilege fiction can afford, the illusion of ghostly proximity to other human souls.”

What does this have to do with her own fiction, particularly the joint project of Gilead and Home?  To begin, let’s explore what makes them a “joint project.”  They explore lives in the same time and place, of people who know each other, shifting the center of gravity slightly to create two different novels.  On this level, there is a similarity between her Gilead, Iowa, and Faulkner’s Yoknapatawpha, Berry’s Port William, or even Joyce’s Dublin.  Yet there’s something different: unlike Faulkner or Berry, she isn’t very concerned with the town/county as such; it’s just the place, not a character.  And unlike Joyce, there’s more than just the busy beehive weaving of human interactions bringing the stories of the various characters into novelistic collision.

Robinson, in Gilead, offers what is, quite literally, “the testimony of the individual mind”: John Ames’ long letter to his young son.  It is supposed to testify on his behalf, and on behalf of his life, when he is no longer alive to do so.  Home offers the testimony of the same time and place through another mind, that of Glory Boughton.  From both perspectives, there are explorations of Ames’ wife, the Rev. Boughton, and—most importantly—Jack Boughton.  Either book, alone, is successful in offering that “testimony of the individual mind,” that “illusion of ghostly proximity to other human souls,” but what they do together—their joint project—is something greater than either could offer in solitude.

That is, they explore the subjectivity of that testimony.  The clearest example, perhaps, is a conversation on Boughton’s porch—if for no other reason than that all the central characters are present, and it is a scene in both novels.  From the beginning, we see what might be expected: minor differences in diction, remembering things from slightly different angles, telling the story to different audiences.  Yet something about Ames’ version strikes the reader as a more relaxed conversation, a bit of jousting on the porch on a pleasant evening; as depicted in Home, it seems somehow a more earnest inquiry.

And at the end of the conversations, we see:

But your mother spoke up, which surprised us all.  She said, “What about being saved?”  She said, “If you can’t change, there don’t seem much purpose in it.”  She blushed.  “That’s not what I meant.”

“You’ve made an excellent point, dear,” Boughton said.  “I worried a long time about how the mystery of predestination could be reconciled with the mystery of salvation.  I remember thinking about that a great deal.”

“No conclusions?” Jack asked.

“None that I can remember.”  Then he said, “To conclude is not in the nature of the enterprise.”

Jack smiled at your mother as if he was looking for an ally, someone to share his frustration, but she just sat very still and studied her hands.

“I should think,” he said, “that the question Mrs. Ames has raised is one you gentlemen would approach with great seriousness.  I know you have attended tent meetings only as interested observers, but— Excuse me.  I don’t believe anyone else wants to pursue this, so I’ll let it go.”

Your mother said, “I’m interested.”

Old Boughton, who was getting a little ruffled, said, “I hope the Presbyterian Church is as good a place as any to learn the blessed truths of the faith, including redemption and salvation first of all.  The Lord knows I have labored to make it so.”

“Pardon me, Father,” Jack said.  “I’ll go find Glory.  She’ll tell me how to make myself useful.  You always said that was the best way to keep out of trouble.”

“No, stay,” your mother said.  And he did.

There was an uneasy silence, so I remarked that he might find Karl Barth a help, just for the sake of conversation.

He said, “Is that what you do when some tormented soul arrives on your doorstep at midnight?  Recommend Karl Barth?”

I said, “It depends on the case,” which it does.  I have found Barth’s work to be full of comfort, as I believe I have told you elsewhere.  But in fact, I don’t recall ever recommending him to any tormented soul except my own.  That is what I mean about being put in a false position.

Your mother said, “A person can change.  Everything can change.”  Still never looking at him.

He said, “Thanks.  That’s all I wanted to know.”

So that was the end of the conversation.  We went home to supper.

Gilead, pp. 152-3

And:

Lila said, “What about being saved?”  She spoke softly and blushed deeply, looking at the hands that lay folded in her lap, but she continued.  “If you can’t change, there don’t seem much point in it.  That’s not really what I meant.”

Jack smiled.  “Of course I myself have attended tent meetings only as an interested observer.  I would not have wanted to find my salvation along some muddy riverbank in the middle of the night.  Half the crowd there to pick each other’s pocket, or to sell each other hot dogs—”

Lila said, “—Caramel corn—”

He laughed.  “—Cotton candy.  And everybody singing off key—”  They both laughed.

“—to some old accordion or something—” she said, never looking up.

“And all of them coming to Jesus.  Except myself, of course.”  Then he said, “Amazing how the world never seems any better for it all.  If I am any judge.”

“Mrs. Ames has made an excellent point,” Boughton said, his voice statesmanlike.  He sensed a wistfulness in Ames as often as he was reminded of all the unknowable life his wife had lived and would live without him.  “Yes, I worried a long time about how the mystery of predestination could be reconciled with the mystery of salvation.”

“No conclusions?”

“None that I can recall just now.”  He said, “It seems as though the conclusions are never as interesting as the questions.  I mean, they’re not what you remember.”  He closed his eyes.

Jack finally looked up at Glory, reading her look and finding in it, apparently, anxiety or irritation, because he said, “I’m sorry.  I think I have gone on with this too long.  I’ll let it go.”

Lila said, never looking up from her hands, “I’m interested.”

Jack smiled at her.  “That’s kind of you, Mrs. Ames.  But I think Glory wants to put me to work.  My father has always said the best way for me to keep out of trouble would be to make myself useful.”

“Just stay for a minute,” she said, and Jack sat back in his chair, and watched her, as they all did, because she seemed to be mustering herself.  Then she looked up at him and said, “A person can change.  Everything can change.”

Ames took off his glasses and rubbed his eyes.  He felt a sort of wonder for this wife of his, in so many ways so unknown to him, and he could be suddenly moved by some glimpse he had never had before of the days of her youth or her loneliness, or of the thoughts of her soul.

Jack said, very gently, “Why thank you, Mrs. Ames.  That’s all I wanted to know.”

Home, pp. 226-8

Jack and Mrs. Ames are both revealed to be different from how Ames’ version of the conversation might make them seem.  It is possible that Ames is suppressing that moment—he is afraid of Jack and his possible influence on his wife and son.  Then again, how are we to know that the narration has not simply lingered too close to the irritated Glory, and that her mind associates the strange, slightly bumpkin Mrs. Ames with such carnivals?  And the Barth?  Ames is old, after all—maybe he only thought he recommended Barth.

It is likely that the version of the conversation given in Home is closer to the “empirical” truth than that in Gilead, if for no other reason than that Ames’ version is significantly shorter: around four pages compared to eight and a half in Home.  And while what remains in Gilead is a chance for Ames to give some of his thoughts on predestination—and show the grace of his wife—to his son, what occurs in Home is a more tense conversation, centering on Jack’s worry that he is, in fact, destined to be evil, to be a sinner.

The accuracy of the individual testimony is clearly limited—it is, as Robinson admits, subjective.  But that much is not her entire point.  That scene—a pivotal one, certainly—is quite different in the two novels: but is either novel any less true for it?  Ames remembers events one way; Glory sees them another; perhaps one or both are actively suppressing or inventing.  But if the latter is the case, they aren’t merely deceiving the reader—they’re deceiving themselves, also; the deception, the would-be-“lie”, becomes a part of their testimony.

The joint project of Home and Gilead is to explore that subjective testimony of the individual life, to highlight the subjectivity of it by juxtaposing each novel with the other, but then to refuse to dismiss or condemn that testimony as flawed or limited.  Robinson celebrates the limitations and subjectivity, because they bring us closer to the reality of the human soul.  It is, in a way, a rejoinder to the idea of a narrator so fallible that the novel cannot even be trusted on the terms it sets forth—it doesn’t matter if nothing in these novels happened as it is narrated: they are not explorations of history, but celebrations of “the beauty and strangeness of the individual soul, that is, of the world as perceived in the course of a human life, of the mind as it exists in time.”

So when I came back to this blogging thing I told myself I was going to try to talk about things like literature, culture, and society, and instead I’m rattling off consecutive posts about war and language and torture.  (Which is basically the same thing that happened when I first began.)  But Drezner got all thought-provoking and wants to hear what twenty-somethings think about intervention:

“As I think about it, here are the Millennials’ foundational foreign policy experiences:

1)  An early childhood of peace and prosperity — a.k.a., the Nineties;

2)  The September 11th attacks;

3)  Two Very Long Wars in Afghanistan and Iraq;

4)  One Financial Panic/Great Recession;

5)  The ascent of China under the shadow of U.S. hegemony.

From these experiences, I would have to conclude that this generation should be anti-interventionist to the point of isolationism.”

Since I qualify, and since I think this might help make further sense of why I’ve spent the last 48 hours complaining about the rhetoric of a WSJ op-ed and Commentary blog post, I’ll take a stab at it before moving on to talk about “culture” (whatever that is).

First, though, I have to take issue with the sequence/narrative Drezner is offering.  It’s not quite so simple as (1) interrupted by (2) resulting in (3) followed by/resulting in (take your pick, I suppose) (4) and (5).  (China, in fact, is not, I think, a major consideration for many people my age—at most, it is a subsidiary of economic concerns, distant thoughts about debt and what things will be like when we are our parents’ age.)

(1) “An early childhood of peace and prosperity – a.k.a., the Nineties” did not exist in quite this formulation.  Yes, there was something of “peace and prosperity,” but it wasn’t outright peace, and it wasn’t outright war.  One of my earliest memories is of Peter Jennings announcing, I believe, the end of the Gulf War as we were sitting at the dinner table.  (I asked why we were fighting, and my father told me it was because the bad guys had gone into Kuwait to steal their money and food.  I was three.)  But it was a childhood not of “peace and prosperity,” but of prosperity and more or less successful humanitarian intervention.  I knew the Gulf War, and the Balkans, and saw Clinton take an active role in the Israel-Palestine peace process—and, in the way it was seen by those around me, come within a half-inch of success.  (I didn’t know from Somalia until Black Hawk Down was released.)

(2) “The September 11th attacks” – yes, this broke, dispelled, shattered the relative (albeit semi-militarized) calm of (1).

(3) “Two Very Long Wars in Afghanistan and Iraq”

How are we to understand this not as a misadventure in itself, but in relation to those early foreign policy lessons from (1)?  I opposed the Iraq War, but was a believer in what, for better or worse, we’ll call the Clinton foreign policy.  Iraq and Afghanistan undermined two important premises of both the Clinton and Bush-43 foreign policies: that “winning” can be easily measured, and that the populace of the country intervened in wants us there, and wants us to win.  (The latter has been shown to not necessarily be the case—certainly some, perhaps many, do want us in Iraq and Afghanistan, but a vocal and sometimes violent segment does not.  How many times did we hear about this in the Balkans?)

But there was a third premise underlying the limited intervention of the 1990s: the feeling of an obligation to intervene, and to win—because, in short, we were The Good Guys.  Iraq and Afghanistan can’t disprove a sense of moral obligation to do something—even if they can indicate that full-on invasion is not the answer.  Let’s go back, briefly, to 2003, even with the hindsight of 2010: does an opposition to invading Iraq also require that one believe we should abandon enforcing no-fly zones, or risk incoherence?  I don’t think we can say this is a clear yes.

And this, perhaps, might explain the number of commenters on Drezner’s post saying that the lesson is not to be anti-intervention, but to be in favor of “smart intervention”—which I take to be something like the Clinton policy, perhaps more cautious.  It might also explain the number of my friends who have adamantly opposed wars in Iraq and Afghanistan, yet have expressed a desire to see American military intervention of some sort in response to Darfur.  The key words in this scenario are generally “air support” without any commitment of American ground troops—that’s what the UN is for.  I’ll admit—at one point, thinking of how we essentially deposed Milosevic using only the Air Force, this was my line of thought.

But why is it no longer my line of thought?  It is not related especially to (4) The Great Recession, because even if you view the recession as a result of Iraq/Afghanistan/our broader foreign policy, I don’t think it necessitates that you oppose “smart intervention.”  Seeing the entanglement of our economic/fiscal future with long-term, large-scale occupations, I think it is precisely such occupations that people will tend to oppose—perhaps a wider range of intervention as well, but judging from my non-scientific friend and peer group, I doubt it.

What has turned me, to some degree, into an anti-interventionist is the realization of the moral cost of war, especially prolonged war (or war-like states).  And, frankly, the moral cost is lost in schemas like the one Drezner has offered.  But while we could go around in circles in perpetuity on the economic and geopolitical cost-benefit analysis of “Clinton-esque” and “Bush-43” interventionisms, we either are or are not going to agree that war—especially prolonged war—poses a danger to (take your pick) the human soul, psyche, and/or moral core.  (Don’t they teach The Things They Carried to high schoolers and college kids, like, everywhere now?  Are people completely missing the point of Tim O’Brien’s entire literary career?  That it’s an exploration of the implications of war for the ability to be human?)  And to deny that there is a moral toll of war—on society as well as on soldiers—is to forget how terrible war is, and to learn to love it too much.

For me, it has been the revelation of the corruption of this Forever War: torture, hollowing of language, subversion of core rights—those are the three key elements, in descending order of importance.  Torture is a moral rot distinct from all others.  For the Austrian/French intellectual Jean Améry, whose essay on torture should be required reading for anyone who wants to discuss, let alone debate, the subject, “torture is the most horrible event a human being can retain within himself.”

When arguments are offered defending torture as an essential part of the war effort, when torture and the broader war effort are corrupting our language, and when, over the course of The Forever War, we see a steady increase in support for torture—until most of the nation, apparently, supports it—the only response I can muster is to say it is too much.  If The Forever War feeds not just moral rot, but this breed of moral rot, then it is time to quit.  One day, I will have children, and I fear their growing up in a nation that practices and accepts torture more than I fear a world where Iran has a nuclear weapon.

The revelation of torture and the vehemence of its supporters is the revelation that the United States is not inherently good, but is good only by choice.  We can choose to be bad, to make the world a worse place, and perpetual war leads us in that direction.  Humanitarian intervention may at times be justified, may at times be necessary, but as a course of policy, the “smart intervention” of the 1990s only paved the way for The Forever War of the 2010s and beyond.  I don’t know whether this makes me “anti-interventionist to the point of isolationism.”  I can only hope that my opposition to intervention would crumble in the face of an Auschwitz—it would be my moral failure were it not to.  But even a just war will not leave the soul untouched, and responding with military force to every humanitarian crisis we as a nation witness will change us at our core.  It already has.

Iran and the Long(er) War

August 26, 2010

Jennifer Rubin commenting on the final paragraph of Bret Stephens’ recent Wall Street Journal piece on twenty years of US-Iraq military-something-or-other:

“Well, this would seem equally apt for the Thirty-One Years War that Iran has waged against the U.S. and the West more generally. Multiple administrations have done nothing as it waged a proxy war through terrorists groups against the West. Neither the Bush administration or the current one has responded to the deaths of hundreds of U.S. soldiers (and Iraqi allies as well) killed by Iran’s weapons and operatives in Iraq. Iran too has committed human-rights atrocities against its own people and defied UN resolutions.

So now we are faced with the threat of a nuclear-armed Iran that would, if it possesses nuclear weapons, certainly be emboldened to continue and step up its war on the West. The question for the Obama administration is whether to finally engage the enemy, thwart Iran’s nuclear ambitions, and commit ourselves to regime change. The chances are slim indeed that this president would rise to the occasion. But perhaps, if Israel buys the world sufficient time (yes, we are down to whether the Jewish state will pick up the slack for the sleeping superpower), the next president will.”

She’s jumping and running with the same two fallacies I noted in Stephens’ article: that there is no difference in kind between US-Iran relations from 1979 to 2010 and the kind of relations the two states would have after the beginning of outright war; and that it was immediately and transparently inevitable, from the regime’s beginning, that it would come to war.

As with Iraq, these pre-war years have not been true “peace”: Iran has taken Americans hostage, funded terror cells that have killed Americans, pursued WMD, and threatened a genocidal war against Israel.  The United States has sparred with Iran from time to time; has enacted an embargo; has condemned the regime as evil.  Perhaps this is war.  If it is, it’s a cold war.

And therein lies my problem with the rhetoric she and Stephens employ.  For 45 years, the United States went to great pains to keep the cold war with the USSR from turning hot.  Why?  Because there was an inherent, fundamental difference between the two: economically, practically, morally, and in simple terms of human life.  The adjective “cold” is in place for a reason: a cold war is something other than outright war.

This “cold war” between Iran and the US does not, of course, operate under the shadow of mutually assured destruction.  But an invasion of Iran would, let us say, be at least as bloody, at least as costly, at least as long, and at least as likely to not succeed (not to fail, mind you—simply not to succeed, to land in some weird grey area) as the war in Iraq.  By my best amateur’s guess, it would likely as not be significantly more so in most if not all categories.

Claiming that we have been at “war” with Iran for 31 years, eliding cold war and hot war, is an attempt to make irrelevant the questions: “Are the costs of entering into war with Iran too high?  What will we gain by doing so?  Is it truly necessary?”  If we’re already at war with Iran, those questions are irrelevant: they are questions to ask before the war.  The rhetoric strives to get us into war by pretending that we’re already in the same war that would occur were we to attack Iran.  After all, if we’ve been at war since 1979, then the debate over whether to start a war is moot.  We might as well just end the damn thing; it’s taken long enough, no?

*          *          *

I should make one comment: the reason I’m interested in this point is not that I want to go around shouting that Rubin and Stephens are being disingenuous, or that I’m concerned particularly with what either of them thinks.  It’s the language that I’m interested in, and that I find so striking—and, as far as I know, Stephens’ article, appearing on the not-quite-obscure WSJ Opinion page, was the first to push this linguistic version of events in Iraq (keeping one eye on Tehran), while Rubin, in addition to showing up conveniently in my Google Reader feed, makes the implicit explicit.

I’m concerned, that is, with what has concerned others before me: the hollowing of language by war.  It is still perhaps the most striking concern of Thucydides’ great work:

“Words had to change their ordinary meaning and to take that which was now given them.” (3.82.4)

The Long War

August 25, 2010

According to Bret Stephens in the Wall Street Journal, the United States has been at war, essentially, for my entire life; according to Jennifer Rubin at Commentary, we’ve been at war since the Iranian hostage crisis (if not slightly earlier).  While I’m not at all displeased to see even the supporters of The Long War inching towards acknowledging it for what it really is, there’s a problem with this line of conversation.

First, let’s look at Stephens’ definition of “war by another name”:

“In that box, he killed tens of thousands of Iraqi Shiites, caused a humanitarian crisis among the Kurds, attempted to assassinate George H.W. Bush, profited from a sanctions regime that otherwise starved his own people, compelled a ‘no-fly zone’ that cost the U.S. $1 billion a year to police, defied more than a dozen U.N. sanctions, corrupted the U.N. Secretariat, evicted U.N. weapons inspectors and gave cash prizes to the families of Palestinian suicide bombers.”

The worst of these are crimes against humanity, and shouldn’t be trivialized.  The no-fly zone, yes, was an example of the growing role of the United States as the world’s police during the 1990s.  On the other hand, a billion a year compared to the $2 trillion price tag of invasion, occupation, and security seems like pocket change.

That price tag is indicative of something important running through the piece: a refusal to acknowledge a difference in kind by re-labeling what occurred during the 1990s.  This is revisionism.  Stephens pretends that there is no difference—in terms of human and capital cost, in terms of social change, in terms of government—between a “military effort designed to contain Saddam Hussein and a military effort designed to replace him.”  Enforcing no-fly zones and an invasion-turned-occupation that is in its eighth year are essentially different.  Perhaps we have been at war with and in Iraq for essentially my lifespan; but the “war” that ran through my elementary school years was nothing like the war that began shortly after I entered high school.

(That I feel it necessary to use scare-quotes around one use of the word “war” in the previous paragraph points toward something particularly sinister about, among other things, The Long War: its corruption of language.  How do we distinguish between the War in Iraq and the semi-militarized 1990s — which saw American troops in Iraq, the Arabian peninsula, the Balkans, Somalia, etc.?  It was “peacetime,” I suppose, but with something wholly other lurking at the horizon.)

Stephens commits himself to another such assumption in the piece: that there was, really, no choice in the matter when it came to the 2003 invasion.  This is already implied by the idea that there is no difference between 1992-2003 and 2003-present.  The way that it had to end was with a full-on invasion and replacement of Saddam Hussein.  This is patently false.  Consider Cuba and Fidel Castro—admittedly not a Saddam Hussein, but he has starved his own people, attempted to acquire WMD, and, at one point, was subjected to, essentially, a “no-sail zone” around his island.  Our policy since the Bay of Pigs, for better or worse, and in varying forms, has been one of containment, content to wait on the natural regime change of human mortality.  (Is that the best policy?  That’s not the question at the moment.)  But there is a choice – and that makes all the difference in the world when one attempts to assess the landscape and create future policy.  Only in a world where there was no true choice in 2003 could this paragraph be written, with one eye on Tehran:

“One thing is clear: The Twenty Years’ War lasted as long as it did because the first Bush administration failed to finish it when it could, and because the Clinton administration pretended it wasn’t happening.  Should we now draw the lesson that hesitation and delay are the best policy?  Or that wars are best fought swiftly to their necessary conclusion?  The former conclusion did not ultimately spare us the war.  The latter would have spared us one of 20 years.”

Stephens’ history, Rubin’s post, and their implications concerning Iran will be the subject of a near-future post.

Let me preface this by saying that the only news I’ve heard in the last week involves either the Final Four (Tom Izzo, go kick some Tarheel ass for me, would ya?), North Korea (really, guys?), or the fact that Ichiro apparently has a bleeding ulcer (ouch).  Now that we’ve gotten that out of the way, the New York Times on the Iowa gay marriage ruling:

“The new decision says marriage is a civil contract and should not be defined by religious doctrine or views.” [Emphasis mine — JLW]

Which is to say, the reason I’m wary of court decisions — as opposed to legislative action or ballot initiatives — encapsulated.  Having ctrl-F’d the decision itself for the word “contract” (three of my last seven days have involved air travel; go easy on me), it appears the editorial is working from this passage:

“This contrast of opinions in our society largely explains the absence of any religion-based rationale to test the constitutionality of Iowa’s same-sex marriage ban. Our constitution does not permit any branch of government to resolve these types of religious debates and entrusts to courts the task of ensuring government avoids them. See Iowa Const. art. I, § 3 (“The general assembly shall make no law respecting an establishment of religion . . . .”). The statute at issue in this case does not prescribe a definition of marriage for religious institutions. Instead, the statute declares, “Marriage is a civil contract” and then regulates that civil contract. Iowa Code § 595A.1. Thus, in pursuing our task in this case, we proceed as civil judges, far removed from the theological debate of religious clerics, and focus only on the concept of civil marriage and the state licensing system that identifies a limited class of persons entitled to secular rights and benefits associated with civil marriage.” [page 65; emphasis mine — JLW] 

Now, I don’t have a problem with the opinion of the court that they should approach civil law as civil justices.  The problem is that the means require them to modify marriage as a contract, as it is defined by state law.  But this codification of marriage does not encompass the entirety of marriage (and was not meant to be more than a legal approximation) — marriage is a societal institution.  The court has no authority to treat it as such an institution — it must treat it, and modify it, as defined by the law: that is, as a contract.  Nothing more, nothing less.

Only society itself can modify marriage as a societal institution.  The citizen body votes on a ballot initiative; their elected representatives — with authority that stems from society — pass laws.  The law itself, I hear you say, defines marriage as a contract, etc.: but again — the definition of marriage as a societal institution goes beyond the limits of the legal code’s authority.  It is a social institution; it is part of the tradition; the tradition is not simply what is defined by government.  The institution can only be (is best?) understood within the legal code as a contract: that does not mean that, as a societal custom, it is nothing more than a contract.  The contract is, then, a legal approximation of marriage.

So back to the original point.  The difference between a legislative means and a judicial means is how marriage is (must be) treated: as a social institution, or as a legal contract.  This is a problem — or something to cause a touch of worry — only if you believe, as I do, that to define marriage socially as a contract is to devalue it.  The decision doesn’t do that, and nothing I’ve said should be taken as any sort of comment about the validity of the decision itself.  But the decision does give the Grey Lady cause to declare in an editorial: “marriage is a civil contract.”  The thing to be wary of is that the approximation becomes the meaning.  If that happens, I think a lot of people who have fought a very long time for gay marriage will look around one day and realize that what they won was only an approximation of what they wanted.

Now on to the things that matter in life, like Opening Day.

(N.B.: This post is primarily a thought experiment.  For the sake of that experiment, I occasionally take things for granted. — JLW, 4/6/09)

Scott Payne’s “Twenty-First Century Conservatism” is well worth a read, even if you wind up disagreeing with all of it. While I’m more prone to staying way up high in the ether and not really wanting to muddy my hands with how all this abstract stuff, you know, applies to politics, he goes there, or makes a start of it, which is helpful. Among his points:

“Critically embrace tradition: a conservatism of the twenty-first century doesn’t need to cut is umbilical cord to tradition altogether, by my lights. In fact, conservatism’s connection to tradition is potentially one it’s strong points in a world increasingly loosened from any moorings. But conservatives need to find ways of embracing those traditions with a critical eye and be prepared to let go of traditions that no longer make any sense. This post by Will Wilson that keep going back to on engaging self-reflective traditions is the key here and I keep waiting for Will to pick that line of thought back up on move it forward a couple more yards, but it’s somewhere to start. This links in to some degree with my comments around culture and is, in many sense, a more full-bodied approach to reform in this regard, but I think there is a whole separate project and element to the ideology at work here that speaks to one of the core planks in conservative identity, so I’m loathe to mash the two together.”

There’s a danger in a self-conscious tradition, and a tradition in which it’s acceptable to toss off a limb for the sake of the whole — traditions, in addition to being billion-headed rabbis (not letting that analogy go, folks), are like starfish: limbs re-grow after time. (But a limbless tradition, like a limbless starfish, is less likely to survive: that’s probably more of a danger for a tradition than for a starfish.)

The problem, on the other hand, with an ossified tradition is that it has ceased to live and lapsed into reflexive (more or less) dogma. An ossified tradition fails because the existence of a tradition within history inherently causes changes to the circumstances of that tradition — and that can necessitate changes to the tradition itself. To borrow (again) from Eliot’s imagery, the creation of a new work of art, by its existence, alters the relation of all previous works of art within the tradition to one another, even if imperceptibly. Any tradition that is not dying or dead is a living tradition.

Or, to pull in Bringhurst (because I’ve been reading him):

“A myth is a theorem about the nature of reality, expressed not in algebraic symbols or inanimate abstractions but in animate narrative form.”

and,

“Because mythologies and sciences alike aspire to be true, they are perpetually under revision. Both lapse into dogma when this revision stops. . . . Where they are healthy, both mythology and science are as faithful to the real as their practitioners can make them, though evidently neither ever perfectly succeeds.”  (Robert Bringhurst, “The Meaning of Mythology” in Everywhere Being is Dancing)

Though not a perfect analogy, reading “myth” as “tradition” works to an extent. In a sense, the tradition is a theorem about the nature of reality, or how we should behave within reality, expressed through and on account of the acquired wisdom of prior generations. “Acquired wisdom”: it is human, not infallible. Burke was never a Tory, and that’s not irrelevant.

The danger lies in irresponsible revision of tradition to make it what we want, rather than understanding that for the tradition to be relevant and effective — for it to survive — it must speak to the moment. “Eipe kai hêmin” says the Odyssey’s narrator, invoking the Muse: “Speak to us in our time.”

The prime issue in matters of tradition and the critical re-thinking thereof is gay marriage. Scott links to this post of Conor’s, which lays out very nicely the Sullivan-esque “Conservative Case for Gay Marriage.” I want to reframe that argument in terms of what I’ve been saying — to demonstrate how that argument is not a “revision of tradition to make it what we want” but a necessary political adaptation of the tradition to allow continued relevance.

It goes into matters of truth and form. (I don’t like the use of the word “truth” there — it’s misleading, in its way. The type of distinction I want to get at is better described as that of “poetry” as the essence of a poem, and “verse” as the form of it.) Marriage, as a political/societal tradition, has at its core the truth that it is essential for society that family units be officially bonded and recognized, and that children, if at all possible, be brought up in families (death can be a circumstantial complication here, however). The form that the tradition stipulates is a man and a woman. Society, however, has moved away from that form, and – if divorce rates are to be allowed to speak their meaning – away from the idea of marriage in any form as much more than a legal contract. (My opinion of divorce is hardly Catholic, but when divorce rates are at 50%, it’s hard to make the case that marriage hasn’t been devalued somehow and that the stability of the nuclear family hasn’t been jeopardized.)

The move to make, such thinking would say, would be to alter the form to better preserve the underlying truth within society. That is to say, expand marriage to include same-sex couples, but make it clear in doing so that it is not because marriage and family mean whatever we want them to mean, but because of the importance of family in stable form to society.

And this all leads up to my objection to the idea of removing government from “marriage” altogether and calling everything a civil union. It defeats the purpose of expanding marriage to defend marriage and the nuclear family: in fact, it only devalues the idea of marriage by having the government declare that, for all political and societal purposes, marriage is nothing more than a contract. Understanding marriage as something divorced from family (this should not be taken as saying that a valid marriage must produce children, or somesuch thing) is far more damaging to marriage, and certainly more against the tradition, than altering the traditional form of marriage.

I used marriage as an example of how the logic of this understanding might work – disagreement with its appropriateness on this issue in particular shouldn’t be taken to mean that it is, generally, inapplicable.

Another caveat: I’m talking here purely about the tradition in political/societal terms. None of this applies to religious or religious/societal understandings of tradition, but the religious tradition and the political tradition oughtn’t be allowed to merge. The Pope need not compromise because of popular sentiment – there’s a strong case that he shouldn’t (but I’m not Catholic, so I’m staying away from actual Catholic issues; he just works as a nice example). One is about the relationship among humans, and the stability of society, and, because of this, is far more mutable; the other is about the relationship between man and G-d (and only after this the relationship among men) and is, because of this, far more immutable.

Doubt, Faith, the Bible

March 3, 2009

Rod Dreher linked to this article by way of Biblical literacy, but what I found truly striking was Plotz’s confession of his struggles with God and the idea of God during and after reading the Bible:

“I began the Bible as a hopeful, but indifferent, agnostic. I wished for a God, but I didn’t really care. I leave the Bible as a hopeless and angry agnostic. I’m brokenhearted about God.”

Then he ends by noting something important:

“As I read the book, I realized that the Bible’s greatest heroes—or, at least, my greatest heroes—are not those who are most faithful, but those who are most contentious and doubtful: Moses negotiating with God at the burning bush, Gideon demanding divine proof before going to war, Job questioning God’s own justice, Abraham demanding that God be merciful to the innocent of Sodom. They challenge God for his capriciousness, and demand justice, order, and morality, even when God refuses to provide them.”

Those Biblical figures called heroes, and pillars, and faithful, and righteous: they doubted. They struggled. Faith did not preclude that; those titles that they earned — could they have earned them without doubting? Would Abraham have been Abraham had he not negotiated for Sodom and Gomorrah? Moses, lost in the desert, doubted and struck the rock in his own name when bringing forth water.

There is an argument to be made that the struggle with doubt is among the most important aspects of faith. I think I would believe that, though I don’t harbor any presumptions about being the one to make it. But we oughtn’t confuse faith with mere belief — the one is a component, an aspect of the other, though integral to it.

At Jewcy, Ben Cohen writes:

“Our view of history — more precisely, the way in which we remember the recent past in the public domain – generally tends to be cluttered by the political imperatives of the present.”

What he’s more specifically talking about is genocide, which has developed different definitions, depending on the situation you’re referring to. In the abstract, legal sense, it

“means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:

(a) Killing members of the group;
(b) Causing serious bodily or mental harm to members of the group;
(c) Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
(d) Imposing measures intended to prevent births within the group;
(e) Forcibly transferring children of the group to another group.”

That’s very roughly what’s meant when it’s used in relation to the Holocaust – and that would make sense, as this definition was formulated as a result of the atrocities of World War Two. Contemporary political conversation, however, has seen “genocide”

“recast as a ‘civil war in which all sides are committing atrocities’ and, equally, ‘a nasty regional conflict in which culpability can be distributed among several parties’.”

Again, roughly speaking: he’s talking about Darfur, which is commonly called a genocide despite its failure to meet the legal definition; and he’s talking about those who refer to Israel as genocidal.

What’s particularly interesting here is the interplay between past and present: altering the definition of a word alters a past action, if that action was inextricably linked to that word (as is the case, I feel safe in claiming, with the Holocaust and genocide). But using that word in a contextually inappropriate present situation applies the older/original definition to the present: the perceived reality of the present is also altered.

That may seem contradictory: present redefining past and past redefining present, the word itself taking on two different yet simultaneous meanings. But imagine it as more akin to the word, used in two different contexts, flattening the differences between those contexts. It isn’t that one becomes the other, but that both are shifted toward a mean, and neither remains what it objectively is/was.

Or, to bring in the requisite line of Orwell:

“He who controls the past controls the future. He who controls the present controls the past.”

See Also: “Notes on Meaning and Language”

I suppose it makes sense, when you stop to think about it (even when your AP Biology quiz on the hormone cycles of human reproduction sealed your future career as not-an-OB/GYN), that having octuplets would carry with it certain dangers—mostly because the odds on naturally-occurring octuplets are longer than the odds when you’re implanting multiple embryos.  And while it appears the solution with the most support is reasonable (restricting the number of embryos implanted at a given time), that this is even being tossed out as a serious option is disturbing:

“Rosenthal, on the other hand, questions the woman’s capacity to make a good decision under the circumstances. Some neonatologists believe that when pregnant women are told about dangers of prematurity or have great expectations about giving birth, their judgment can be impaired, she said.

The situation raises the issue of whether a doctor ought to override a patient’s wishes for the sake of saving lives, she said. Although the health care system in America gives patients autonomy in making decisions about their own bodies, when emotionally distraught, some people decide poorly, she said.” [emphasis added – JLW]

The case the article was discussing involved a woman refusing “selective termination” (which, I have learned, “is not the same as traditional abortion because the goal is the healthiest possible birth rather than the termination of a pregnancy”).  That is to say, there are doctors out there, taken seriously by at least CNN, who think they ought to have the right to force an abortion.  Which seems to be against the spirit of the wood planks tied to a tree in the middle of campus proclaiming, “Choice Today!  Choice Forever!”  (The decorative condoms have deflated.)

What’s worth complaining about more than the abortion aspect (because I’ll either be shouting into the wind or preaching to the choir, depending on who’s reading), or the Orwellian euphemism (self-evident), is this attitude of cold-blooded “rationalism” and tyranny of “expertise.”  The mother is deemed to be behaving unreasonably because she’s not willing to make a value-judgment about human life — and the conclusion drawn, that she is “emotionally distraught,” flattens out the entire moral and — yes — emotional matrix behind the decision.  It is a matter of numerical, utilitarian preservation, not adherence to what anyone might believe is a more important truth behind the matter.

Remember Obama’s response to an abortion question at Rick Warren’s interview-thing: above his pay-grade.  This doctor certainly agrees that it is above the patient’s pay-grade: but she finds it precisely at her own.  She, not the patient, is the expert; she, not the patient, should make all decisions.  Because of her expertise, her moral system supersedes that of the mother.  The individual self and that self’s moral matrix are consumed by those of the doctor: the individual is there to go on living on a physical level, because that’s apparently what Nature and Science call for, but since living on the spiritual, moral, and intellectual planes interferes with that, we must outsource.  The reason a patient’s right to control their own body is so important is that it is also the right to control one’s own self: this would seem doubly (yet differently) so in the case of a mother and the right to protect her children (born or unborn, and call them what you will in the latter case) — is there no parental prerogative?  And what would the absorption of that prerogative into the realm of “expertise” mean except that the role of parent — with the requisite individuality — is being absorbed into an outsourced expertise?

Or, to conjure that space-travelling Percyian to make his (as is frequent) all-too-human point:

MCCOY: “Dear Lord, do you suppose we’re intelligent enough to…. Suppose…what if this thing were used where life already existed?”

SPOCK: “It would destroy such life in favor of its new matrix.”

MCCOY: “Its new matrix. Do you have any idea what you’re saying?”

SPOCK: “I was not attempting to evaluate its moral implications, Doctor. As a matter of cosmic history, it has always been easier to destroy than create.”

MCCOY: “Not anymore, now we can do both at the same time. According to myth, the Earth was created in six days. Now watch out, here comes Genesis. We’ll do it for you in six minutes!”

SPOCK: “I do not dispute that in the wrong hands…”

MCCOY: “In the wrong hands? Would you mind telling me whose are the right hands, my logical friend? Or are you, by chance, in favor of these experiments?”

KIRK: “Gentlemen, gentlemen…”

SPOCK: “Really, Dr. McCoy. You must learn to govern your passions. They will be your undoing. Logic suggests…”

MCCOY: “Logic? My God, the man’s talking about logic. We’re talking about universal Armageddon!”