Why I like old hymns and their old lyrics

In yesterday’s post I mentioned that I grew up in a church where we sang hymns and then went to a youth group where we sang modern folk-inflected praise songs. Might as well write a bit more about church music today, while I’m on the subject.

The first thing I’ll say is that I’m not here to take sides in the hymns-versus-praise-choruses debate. People connect with God through many different kinds of music, including not only the stately majesty of hymns and the creative energy of praise choruses but also the lively immediacy of Gospel songs, the lavish richness of classical choral compositions, the spare simplicity of monastic chants, and plenty of other forms. More than one of those musical forms has helped me connect with God myself at one time or another, and if hymns resonate a little more deeply with me, that’s just me; other folks have the same response to other kinds of music. That’s one of many things that makes the diversity of the worldwide church so enriching and exciting.

But I do want to write about hymns a little bit today, and the thing I want to address is the modernization of the lyrics. It doesn’t happen often, but it happens, and I usually detest it.

See, one of the things I like so much about hymns is their theological density. The verses of a good hymn can bore through the layers of a theme or unfold a progression of ideas in a way that fires my imagination and gets my spirit–which, for me, has a close relationship with my brain–eager for God to reveal himself. (Again, that’s what hymns do for me; other musical forms grab other people, and sometimes also me, in other ways that are also good.)

But sometimes modern Christians insist on revising the words so the hymns say something different. I’m not talking about writing a new refrain to insert between the verses of a classic hymn, though for purely aesthetic reasons I’m not always crazy about that practice either; I’m talking about rewriting the words of the hymn itself. Why some Christians do this, I don’t know, and it probably varies from instance to instance. Maybe they see a word they don’t really understand, so they figure that to make the hymn comprehensible to them, or maybe accessible to non-Christian visitors to their churches, they have to rewrite the line to remove the offending word. Maybe they see a theology they don’t recognize, so instead of trying to understand it they replace it with another one. Maybe–and this one I can sort of accept–they see gender-specific language in a lyric that’s meant to apply to all humans everywhere, so they try to expunge it for the sake of inclusivity.

Whatever the reasons, the classic instance of this problem, for me, coincidentally (and irritatingly) involves one of my favorite hymns: “Come Thou Fount of Every Blessing,” usually sung to the tune “Nettleton.” The version I grew up singing had this second verse:

Here I raise my ebenezer;
Hither by Thy help I’m come;
And I hope by Thy good pleasure
Safely to arrive at home.
Jesus sought me when a stranger,
Wand’ring from the fold of God;
He, to rescue me from danger,
Interposed His precious blood.

Robert Robinson’s original 1757 lyrics were in a different order, so perhaps in inveighing against monkeying with hymn lyrics I’m a bit hypocritical. But this version, even while reordering them, leaves the ideas themselves essentially intact. What doesn’t is this late-20th-century revision:

Hitherto thy love has blessed me;
Thou hast brought me to this place.
And I know thy hand will bring me
Safely home by thy good grace.
Jesus sought me when a stranger,
Wand’ring from the fold of God;
He, to rescue me from danger,
Bought me with his precious blood.

To begin with, the first four lines are completely rewritten. Now, I understand that nowadays the only reference point people have for the word “ebenezer” is Ebenezer Scrooge. But the word itself, derived from Hebrew, means “stone of help” and is used in I Samuel 7:12 as the name of a monument Samuel erected to commemorate a victory over the Philistines, won with God’s help. (Side note: we should totally start using “ebenezer” as a normal word again.) The idea of a journey and a return home is preserved, but in addition to replacing God’s help and pleasure with his love and grace, the change deprives the hymn of both a vivid image and a Biblical allusion.

The other major change to this verse is in the last line, where “interposed” is replaced with “bought me with.” These are two very different understandings of the cross. The newer version is economic: there was a price to be paid for me, and Jesus paid it, using his blood as the currency. But the older version has an immediacy to it that the newer one doesn’t quite capture: death or some other danger was making straight for me, but Jesus stepped between it and me so that it would strike him instead. In the newer version, Jesus pays a ransom for me; in the older version, Jesus takes a bullet for me.

Mind you, the new lyrics are not theologically incorrect. It’s not as though the hymn used to be true and now isn’t. It’s that the hymn used to say one true thing and now says a different true thing instead. The first true thing, which may have fit well with 18th-century theological emphases and sensibilities, has been replaced by a different true thing, which fits well with 20th-century theological emphases and sensibilities. But maybe it’s precisely because the revision fits so well with current thinking that we should keep singing the old, unrevised version, to remind ourselves of truths we too easily forget because we don’t encounter them as often.

That’s one of the reasons I like hymns. And it’s one of the gifts that, at their best, they can offer the whole church–including the people who prefer praise choruses.

Christian rock: How I started listening and why I started listening less

A couple of weeks ago I said I might write a post about how I got into and then back out of listening to Christian music, so here I go.

I grew up in a church that sang hymns. I knew of a few Christian songs in other genres, like the folk-ish kinds of things you’d hear someone strumming on a guitar in the ’60s or ’70s, and I was dimly aware of something called “gospel,” but most of what I knew was what you’d find in a standard white mainline Protestant hymnal. But then I started attending Bible studies and other events for a youth group at one of those suburban megachurches, where we sang what I can best describe as newer songs in the tradition of the ’60s and ’70s folk stuff. It was fun, but not particularly earth-shaking.

Until the night they showed us the music video for Michael W. Smith’s “Secret Ambition.” (Warning to sensitive viewers: The linked video includes brief scenes of flogging and crucifixion. Also, a mullet.) This was the first exposure I ever had to Christian rock. I’d never believed (or been taught) that you couldn’t combine Christian themes with rock music; it just hadn’t crossed my mind that you could.

Shortly after that I learned about Petra (Warning: More mullets. I’ll stop with the video links here, though.), then others. My high school days brought me Audio Adrenaline, the Newsboys, DC Talk (of course), Jars of Clay, the O.C. Supertones, and Caedmon’s Call. Then in college I joined one of those CD clubs they had in those days, where you get a dozen CDs for the price of one and then every month they send you another one in your genre of choice and you can either pay for it or send it back. That CD club was focused on Christian music, and through it I discovered the likes of Skillet, Bleach, Guardian, MxPx, Burlap to Cashmere, and Room Full of Walters. I even had a show on the campus radio station one semester, playing Christian rock for the, like, two people who were awake and listening to it at 6:00 on a Wednesday morning.

I should point out that I never adopted the mentality that a Christian should only listen to Christian music. But I was impressed with the variety and quality of some of the acts, and I liked the music and lyrics I was hearing.

But college was also a time to start thinking about life more deeply than I had so far, and since Christian music was part of my life, I started thinking more about it.

At first the question was merely one of taste. I’d heard enough Christian music I didn’t like that I came up with a rule of thumb: if I wouldn’t like the music of a given act without the Christian-themed lyrics, I wouldn’t bother with it as a Christian act either. That was simple enough.

But then the questions started getting more complicated. A woman I knew wouldn’t let her teenage son listen to the Christian heavy metal he liked because the lyrics weren’t clearly audible. Does music not count as Christian if the words aren’t distinct? Another friend pointed out that the applause at Christian artists’ concerts might not all be given to God. Is an artist’s message invalidated if some of the fans are applauding the band instead of God–or if the artist is accepting some of their applause for himself? I found myself less willing than some of my friends to write off music as not really Christian over issues like those, but the questions still troubled me.

The questioning continued after college. An article about author Reed Arvin’s struggles to find a Christian publisher for his novel got me thinking about Christian music in the context of the larger Christian publishing industry. A book about the spiritual journey of U2 and a couple of interviews with Bruce Cockburn (the sources for which I can no longer find), which included Cockburn’s and some of the U2 members’ thoughts about deciding whether to identify as Christian artists, pushed that issue further. I came to understand that, in their capacity as gatekeepers, publishers can legitimate or exclude certain voices and can restrict the kinds of decisions artists make about lyrical content, the auditory balance between music and words, and so on. And marketers, culture-makers (like the reviewers who wrote for Christian magazines), and retailers shape both production and consumption along similar lines. It becomes easy for Christians to sequester themselves in a sort of pop-culture silo, hearing little from outside and being heard by few who aren’t their own.

On their own, those insights might have made me appreciate how the cultural gatekeepers can protect Christians from unwittingly hearing anything that might be a bad influence (although that kind of protection isn’t something I’m inclined to appreciate to begin with). But couple them with some theological points that I was beginning to understand–namely, that all truth is of God regardless of whose mouth speaks it and that creativity and excellence honor God simply by virtue of creatures made in his image using gifts he gave them to create and to create well–and I was bound to start seeing the Christian gatekeepers as restricting the artists, sanitizing the art, and undermining the integrity of their art and their messages. And I didn’t want to restrict my own art consumption that way, to implicitly validate the pop-culture silo’s restrictions, or to privilege Christian art over art that took other forms or spoke other messages.

So I backed off a bit. I didn’t stop listening to Christian music–and in fact I still listen to most of the CDs that meant something to me when I was in high school or college, and they still speak to me–but of the new music I’ve bought in the last twelve years, none has been Christian.

And it’s been interesting to notice the truth and beauty I’ve been finding in other places. Maybe eventually I’ll write a post about that, too. But no promises.

Radio: a complaint, a defense, and a plea

A few posts ago I was writing about music, habitual music listening, and the fear of silence. I mentioned a radio station I was forced to endure at an office job I held for a few years and suggested that I might write a post about that station. Here’s that post.

Which radio station it was is not important. If you live pretty much anywhere in the English-speaking world, your radio dial can tune in at least one station like it; if you live in the U.S. or Canada, you can probably get half a dozen. You know the ones: they claim to play “the best mix of today’s hits and yesterday’s favorites,” and most of them, including this one, have something they call a “no-repeat workday.” You hear these stations a lot in workplaces, especially workplaces with large populations of middle-aged white people, because the music they play is generally thought to have the broadest appeal, or at least to offend the sensibilities and tastes of the fewest people.

Bull.

Now, it’s not that the music (well, most of it) is actually bad. Some of it is, of course, but the bulk of it is pretty mediocre, and some is even quite good. No, there’s another whole set of problems, which tie into the fact that a radio station is a business whose customers are not listeners but advertisers and whose product is not music but the ears of the listeners.

One way this fact plays out is that the disc jockeys don’t actually choose what songs will be played over the air. In fact, the DJs aren’t even necessary, except to maintain the illusion that there are still humans in a studio deciding how best to entertain their listeners, the way radio used to work decades ago. No, the songs are chosen by computer algorithms and compiled into lists that can be played entirely automatically. The computer algorithms calculate the selection and arrangement of songs that will induce the largest number of people (or at least the ones in key demographics) to tune in the station and leave it tuned in. Different radio formats–country, classic rock, pop, and so on–draw from different pools of songs, but they all work basically the same way. The “variety” these stations promise the listeners is in fact carefully controlled.

Try making a list of the songs your local station plays during an average workday. Then do the same the next day, and the next. Then compare the lists and see how many songs are on all three. Then note the number of songs that get played at roughly the same time every day, give or take a few minutes. You’ll see what I mean.

I will cop to liking ABBA. I will cop to liking “Dancing Queen,” though I prefer some of their other songs, if only because I hear them about a twentieth as often. But when the radio station at my former workplace played “Dancing Queen” sometime between 10:20 and 10:40 every morning for six straight weeks, I’d had quite enough.

This brings me to another point: the con that is the “no-repeat workday.” Ostensibly this is another promise of musical variety, assuring you that, even if a given song is a big hit, you won’t have to hear it over and over during the day. But that’s only sort of true, because most stations’ “no-repeat workday” lasts from around 9:00 to around 4:00, while most people’s actual workday (well, OK, mine at that job) ran from 8:00 to 4:30. So what would happen was that we’d hear a certain hit song between 8:00 and 9:00, then again between 9:00 and 4:00, then a third time as soon as 4:00 arrived and the restriction of the “no-repeat workday” was lifted.

Even so, the “no-repeat workday” might well have “[made] your workday go by faster,” as the station’s jingle claimed–if not for the fact that its effect on the workweek was precisely the opposite. By playing even a handful of the same songs at the same times every day (e.g., the “Dancing Queen” fiasco), the station made every workday blend into every other and thereby only increased the soul-crushing monotony of the job. (Oh, right, forgot to mention: I wasn’t terribly keen on this particular job to begin with.)

I did get a little bit of relief at 3:00, when the two co-workers with whom I shared workspace left and I was free to change the station on our radio to whatever I wanted, so long as I kept the volume reasonable. I usually opted for classical, because it was far away, musically speaking, from what I’d been subjected to up to that point.

Anyway, I noted earlier that the business of radio works in particular ways, and that leads me to my final point: we can demand better, and we should.

Radio at its best can give our days and nights wonderful soundtracks that surprise us and expose us to new music that we might not have sought out but that grabs hold of us anyway. I’d never heard Counting Crows’ “A Long December” until I turned on the radio one night when I was feeling low and the song gave the perfect voice to my mood. Then there’s Bach’s Brandenburg Concerto #6, which I heard for the first time one afternoon at work–and drove to a store and bought on CD as soon as I clocked out. An mp3 player I loaded myself couldn’t have done those things, but a radio station could. Radio can be great.

But so many radio stations use this horrible operating model, and they keep using it because it still draws listeners, which still draws advertisers, which means they have no incentive to change. There are, however, a handful of radio stations, public ones among them, that operate on different models and still use live DJs to select the music. So find those stations and listen to them. Because if enough of us abandon the stations that use computer-created playlists that vary so little from one day to the next and listen instead to the stations whose live DJs remember what “variety” actually means, perhaps the advertisers will take notice, and then perhaps they’ll encourage other radio stations to change their ways.

And then maybe, just maybe, radio will be worth listening to again.

What How I Met Your Mother and David Copperfield have in common

Because it’s been on my mind since it aired Monday night, I want to say a few words about the How I Met Your Mother series finale and a storytelling motif I was really disappointed to see it use. Stop reading now if you don’t want the ending spoiled. Also, stop reading if you don’t want the ending of David Copperfield spoiled, because that comes up, too.

 

***   SPOILERS AHEAD!   ***

 

In case you haven’t seen the finale (or the series) and are reading anyway because you’re curious where I’m going, a brief summary of the series and its finale:

HIMYM followed the lives of a single guy named Ted and four of his close friends, using voiceover narrations by a decades-older Ted to his teenage children as a framing device. Throughout the show’s nine-season run, Ted repeatedly dated, fell in love with, broke up with, and pined for Robin, one of the other four central characters. We knew their romance was doomed in the first episode, when narrator-Ted identified this wonderful woman he’d just met as “your Aunt Robin,” and one of the reasons they broke up was that Ted really wanted kids and Robin really didn’t. Still, he spent much of the show’s run agonizing about whether or not he should be with her anyway, even when one or both of them were in relationships with other people.

In the finale of the penultimate season, viewers were given their first glimpse of the Mother, who became a regular character in the final season before Ted finally met her in the finale–following the wedding of Robin and Barney, another of the four. A series of brief flashbacks recounted Ted’s relationship with the Mother (whose name is revealed to be Tracy), Robin and Barney’s divorce, and Tracy’s illness and death, among other events.

Ted’s children picked up right away on the fact that their father’s excessively long story about meeting their mother didn’t actually include their mother until the very end–and that it began with Ted’s meeting of Robin. They pointed out to their dad that he and Aunt Robin had always seemed pretty keen on each other whenever she came over for dinner, and they encouraged him, now six years a widower, to finally ask her out. Ted went to Robin’s apartment, made what she would recognize as a romantic overture, they smiled at each other, fade to black.

Now then.

In fairness to the creators, the finale provided a satisfying answer to the question of why, in his story that was ostensibly about how he met his kids’ mother, Ted didn’t actually meet their mother until the very end. I’ll give ’em that.

But that was pretty much the only thing I found satisfying about the way the series ended.

When it was revealed in the first episode that Robin was not the kids’ mother, and even more so when it was revealed a few seasons later that she would eventually marry Barney, I was quite happy at the news. There are already plenty of love stories out there wherein two people meet, fall in love, maybe have a few problems, but eventually marry and live happily ever after–and that’s good. But those stories already exist. I was eager for a tale in which what seemed like the One True Couple would not end up together, especially as a result of their own choices rather than external circumstances, and in which they’d each have to seek–and find–happiness elsewhere. That kind of tale would force the characters to wrestle with complex emotions and to make choices about which ones to feed and which ones to starve, and that’s the direction I was really hoping HIMYM would go.

But it didn’t.

No, it kind of pulled a David Copperfield on us. The hero married somebody other than his putatively obvious best match, and after a few years she died of an unnamed illness, leaving him free to marry the woman whom the fans thought he should have been with all along. Tracy was a really good match for Ted while Dora turned out to be a complete idiot, so Tracy’s death grieved Ted deeply while Dora’s struck David as more relief than tragedy, but both endings bothered me because they both basically granted the One True Couple a romantic do-over. The heroes got to be with the romantic partners they should have had (according to the fans) all along; apart from Ted’s grief at Tracy’s death, the negative consequences of their relationship choices didn’t stick. (Ted Kissell writes something very similar about Steven Moffat’s handling of Doctor Who.) A deus ex machina turned what was building toward a moving, emotionally complex, and unusual ending into something almost jarring in its blandness.

And that’s what I found most unsatisfying. The characters and their story deserved better.

On Walkmans and habitual music-listening

Confession time: I do not own an mp3 player.

Nope, if I want to listen to music, I have to be in my car or at my computer or someplace I can plug in my small but better-than-nothing boombox.

And I’m OK with that.

I had a Walkman or two when I was growing up, but since then I’ve never really felt the need to own a portable music-playing device. For those of you who don’t remember the 1980s, a Walkman was a portable battery-operated gadget that usually included both an AM/FM radio and a cassette player and that allowed you to listen to music through headphones so you weren’t inflicting Mötley Crüe or LL Cool J or Weird Al or whoever on the people around you. Walkmans went through AA batteries faster than a competitive eater goes through hot dogs, and they usually ate tapes after too many plays as well, but we still thought they were a pretty neat thing to have.

(Also, a cassette was a rectangular plastic thing that could hold a recording of music or anything else audible on a long strip of magnetic tape that wound from one spool to another within the cassette. Cassettes broke a lot, but they were versatile and convenient for their time.)

But the thing was, I didn’t actually use my Walkman all that often. If I wanted to listen to music at home, I’d use a radio or boom box or the home stereo or whatever was convenient. And it wasn’t often that I wanted to listen to music on the go, either–though on family car trips the Walkmans our parents bought my sister and me (probably for this very reason) spared us many a squabble over control of the car stereo. I like music plenty, but most of the time my music jones was satisfied by what I’d heard on the radio as I was waking up and what I’d play on the stereo when I got home from school, and I didn’t need much else.

One thing I noticed about Walkmans was that headphones tended to isolate people. Any given person who was wearing a pair may or may not have been trying to cut off human contact, but people in general were much more reluctant to try to speak to headphone wearers than to other folks. Sure, sometimes not having to deal with people is nice, but most of the time I felt like I wanted to at least be available and approachable if somebody (or at least somebody I didn’t actively dislike) wanted to talk to me. Even today, on the public buses, earphones send a signal that a person doesn’t want to be approached. I half suspect that some people wear earphones without even listening to anything, just to ward off other humans.

Another thing I noticed was that, for some people, listening to music or at least having some playing in the background seemed to be a compulsion, a need. The first time this became painfully apparent to me was when a high-school friend confessed to a group of us that she listened to music all the time because she was afraid of silence. It hadn’t occurred to me that silence was a thing you could be afraid of, but she got me thinking about why someone might have that fear and what it must be like. The second time was in one of my first post-college jobs, in an office where we were subjected all day to a certain local radio station, probably chosen for its inoffensive song selection. (Note to self: “How that station represents everything that is abominable about radio today” would make a good topic for a post.) It was as though the denizens of the office had collectively agreed that even tiresome repetitions of the same grating pop hits were better than an hour without music (or even, heaven forbid, different music).

Now, different people like different things, and I’m not out to criticize anyone’s taste in music or anyone’s preference for where or when or how often they listen to it. There’s nothing at all wrong with Walkmans or radios or mp3 players or music. Do not take my own habits as either typical or prescriptive; you do what works for you.

What I do want to encourage you to do is to think about your music-listening habits and why they are what they are. Do you typically listen to music simply because you want to, or because you feel like you need to? If you feel like you need to listen to music, why do you suppose that is? Does it provide some benefit, like helping you stay awake or motivated, or is it a way of avoiding something, like silence or conversation? (Some needs, after all, are legitimate needs; others are symptoms of a bigger problem.)

If listening to music is simply a source of pleasure for you, well and good. But if it’s an addiction or a response to the fear of something, seeing that fact is the first step to freeing yourself from what holds you prisoner. And if it’s a way to avoid interacting with people, maybe it’s time to stop shutting out the world–and depriving the rest of us of what you have to offer.

Please?

On the Hobby Lobby case and the meaning of “rights”

I didn’t manage a post yesterday, despite my Lenten goal. Owing to a change in my circumstances, it looks like it’ll be tough for me to find time to post on Thursdays for the next couple of months (which takes us right past the end of Lent and out of my Lenten posting scheme anyway). I’ll do what I can the other weekdays, though, including today.

Today I want to address an issue that’s been bugging me for years now and that enjoyed some time in the public spotlight this week. On Tuesday, the United States Supreme Court heard arguments in the case of Sebelius v. Hobby Lobby Stores, Inc. This case (along with a couple of related cases) revolves around whether privately owned for-profit companies can, on the basis of religious freedom, hold themselves exempt from the Affordable Care Act’s requirement that they provide their employees with health insurance that includes coverage for certain kinds of birth control. The ACA mandates that healthcare plans cover some twenty forms of birth control, four of which Hobby Lobby’s Christian owners find morally objectionable because they can in some cases prevent a fertilized egg from implanting and thus extinguish a human life that had already been conceived. Key questions include whether the birth control mandate places an undue burden on Hobby Lobby’s owners’ religious freedom; whether the government has a compelling interest in refusing to grant an exception; whether exceptions already granted to religious nonprofits should extend to for-profit companies owned by religious people; and whether and to what degree the religious convictions of a company’s owners constitute a basis for claiming that the company is entitled to religious freedom in certain matters.

There are important arguments to be made for both sides, even assuming cooler heads prevail (which, in the Supreme Court, sometimes they do). Meanwhile, in the commentariat and the blogosphere, some of Hobby Lobby’s supporters are making this case into a more apocalyptic mountain than it really is, and some of its opponents are doing the same. A few articles, like this one, are trying to defuse some of the tension by correcting false assumptions by one or both sides, but on the whole the case has passions running high. In any case, a ruling is expected sometime in June, so we’ll have to wait until then to learn the Court’s decision.

But what bugs me about this case, and about plenty of other issues over the last several years, is one of the assumptions made by supporters of the government’s side. That assumption is that securing a person’s right to something–in this case, coverage for certain kinds of birth control–means using the coercive power of government to force a third party–in this case, a private employer–to provide that thing. The government is saying, in effect, “Your employees have the right to a certain kind of health coverage, and we’ve decided that it’s your responsibility to provide them with that coverage, so we’re going to make sure you do so; you don’t get to deny them their rights by refusing to provide coverage that meets our standards.”

Now, note that Hobby Lobby isn’t trying to prevent its employees from buying other health insurance, on their own, that covers the forms of birth control for which Hobby Lobby’s owners don’t want to pay. Nor is Hobby Lobby trying to prevent its employees from going to the store and buying those forms of birth control out of their own pockets. In other words, Hobby Lobby isn’t keeping its employees from getting access to those kinds of birth control. No, Hobby Lobby is arguing that it shouldn’t have to use its money to subsidize, through employee health insurance, forms of birth control to which its owners have a religiously based moral objection. But Hobby Lobby’s opponents are still saying that its refusal to do so constitutes an attack on its employees’ right to coverage that includes those things.

Sorry, since when is a “right” defined as “something that must be provided for you by someone else”? Nobody argues that the right to bear arms means that the government–or your local sporting-goods store, for that matter–is required to provide you with a gun. Nobody argues that the right to free speech means that a newspaper is required to print your column. Defining these things as “rights” just means that the government can’t prevent you from doing these things on your own (though even then, courts have ruled that in some circumstances, like slander and libel, it can). In fact, several amendments in the Bill of Rights specifically enumerate things the government cannot do to you, like quarter soldiers in your house, search and seize your stuff unreasonably, or force you to testify against yourself.

So how did we get to the point as a society where we’re using the term “rights” to mean things that we want the government to provide or force others to provide for us? And how can we get back to using the term “rights” to mean simply freedoms that the government and others don’t get to take away from us? Because it’s long past time we did.

The World Vision dust-up and schools’ statements of faith

Anyone been following the World Vision kerfuffle this week?

On Monday Richard Stearns, the president of the Christian aid organization World Vision U.S., apparently apropos of nothing, announced that the organization would now permit the hiring of Christians in legal same-sex marriages (at least in its U.S. offices). The decision, he said, stemmed from the organization’s desire to put Christian unity in helping the world’s poor ahead of taking any particular stand on a divisive issue.

Predictably, Christians all over America, especially those from the more socially conservative Pentecostal and Evangelical churches from whom World Vision draws much of its support, took up arms. A few came to World Vision’s defense (including this article by Rachel Held Evans, which was the first I read of any of this business), but most slammed Stearns and World Vision for compromising the gospel, for sacrificing truth for some dream of unity, or for other such sins–you know, the standard accusations Christians in one group level against Christians in another group who are willing to work with people Christians in the first group don’t want to work with. (Billy Graham faced similar accusations, early in his career, from Fundamentalist churches who were happy to support his crusades until he made clear that he was willing to share a podium with Catholics and liberal Protestants.) Today, Stearns and World Vision announced that they were reversing the decision.

There are arguments to be made on both sides, certainly. There are plenty of faithful Christians who believe World Vision’s decision on Monday was the right thing to do, or at least a step in the right direction, and there are plenty of faithful Christians who believe World Vision’s reversal of that decision on Wednesday was the right call.

But today I want to focus less on World Vision’s actions than on Christians’ reactions.

First, while no one else quite matches its reputation for scope and effectiveness, World Vision isn’t the only game in town. Christians who want to support international aid and relief efforts, sponsor children and communities, and otherwise participate in work similar to World Vision’s do have other options. Both World Vision’s detractors and its defenders would do well to remember this point. Calls to stop supporting World Vision should be followed immediately by suggestions of other worthy organizations to support instead. Likewise, accusations that World Vision’s detractors would rather let children starve than support them through that organization should be tempered by the recognition that many (though probably not all) who pull their support from World Vision will direct it to another organization also doing good work, and by the recognition that World Vision’s policy change may–or may not–attract new support that offsets what was lost.

Second, and more importantly, this whole incident highlights a disturbing overemphasis on issues pertaining to homosexuality in American churches today, an assumption that a particular set of beliefs concerning the acceptability or unacceptability of homosexual behavior is a core part of the gospel message. Here’s Evans:

I have to ask: Since when? Since when has the reality that Christ has died, Christ has risen, and Christ will come again ever been threatened by two men committing their lives to one another? Since when have the historic Christian creeds, recognized for centuries as the theological articulation of Orthodoxy, included a word about the issue of gay marriage? Since when have my gay and lesbian friends—many of whom are committed Christians—ever kept me from loving God with all my heart, soul, mind and strength and loving my neighbor as myself? Since when has a single interpretation of the biblical passages in question here been deemed the only one faithful Christians can have? [Emphasis in original.]

On this point I agree very much with Evans, although to her last question I would add, “And to whom has God given the final authority to deem it so?”, since there are plenty of Christians with differing ideas who seem to think their deeming is the correct one.

A few years ago I was considering the possibility of trying to teach history at a Christian high school somewhere, so I started checking out various schools to see what kinds of things they required of their teachers. Nearly every school had a statement of faith which it required all its faculty to sign. Fair enough; parents who want to send their kids to a religiously-affiliated school are quite reasonable to expect that the teachers who will be filling their children’s heads with knowledge will do so in a manner consistent with the family’s and the school’s professed faith. And, much as I expected, there were some statements that I could sign in good conscience and some that, for a variety of reasons, I could not.

But what disturbed me most about those statements of faith was the astonishingly large percentage of them that included a line about the wrongness of homosexuality or the rightness of traditional heterosexual marriage. And what disturbed me about those lines was not so much the position the schools were taking on homosexuality, which was more or less the position I’d expect most of those schools to take. What disturbed me was that so many schools had decided that that one issue was so important that it belonged in the statement of faith at all.

Affirming the Trinity? Yes. The saving work of Jesus on the cross? Sure. The Resurrection? Absolutely. The authority of the Bible? Maybe, maybe not, depending on how you’re asking me to interpret the text. (You’ll notice I said there were a variety of reasons I couldn’t sign some of the statements of faith; a line insisting on six-day creationism was one of them.)

The definition of marriage as between one man and one woman and the sinfulness of same-sex romantic relationships? Um, sorry, how is that central enough to Christianity to even be on this list?

See, most of those statements of faith didn’t include a word about race issues, about poverty and economic justice (a topic Jesus himself addressed a lot more than sexual orientation), about violence on large scales and small, or about a host of other social issues. No, the one social issue they mentioned was homosexuality, and specifically the wrongness thereof. Is that issue–and only that issue–really so close to the heart of the gospel, the core of the Christian faith, that it belongs in a school’s statement of faith? Or anyone’s?

Well … no. Not anyone’s I’d be willing to work for, anyway.