Last week was momentous at the Supreme Court. On Thursday the Court upheld the legality of nationwide subsidies, a key feature of the Affordable Care Act (i.e., “Obamacare”). On Friday the Court ushered in nationwide marriage equality. I am very pleased with the outcome of both cases. Let's examine each in turn.
1. King v. Burwell. This was the second attempt to invalidate the Affordable Care Act, after the first one failed in 2012. This time the challengers claimed that the phrase “exchange established by the State” means subsidies are lawfully available only to residents of the individual states that have established health care exchanges. If you do not reside in such a state, tough luck as your premiums rise.
This is a preposterous position, as I asserted in April. Precision in language is one thing. Pedantic hair-splitting that is divorced from all context and logic is another.
Chief Justice John Roberts agrees. Writing masterfully in the controlling opinion, he makes several essential points: “[W]hen deciding whether the language is plain, the Court must read the words ‘in their context and with a view to their place in the overall statutory scheme.’” ... “Here the statutory scheme compels the Court to reject petitioners’ interpretation because it would destabilize the individual insurance market in any State with a Federal Exchange, and likely create the very ‘death spirals’ that Congress designed the Act to avoid.”
Hear, hear. In his dissent, Justice Scalia accuses the Court of making up new meanings for words while stepping out of its lane. In fact, that is precisely what Justice Scalia himself is doing in this (thankfully) failed attempt to usurp the authority of the people's elected representatives.
2. Obergefell v. Hodges. This 5-4 decision ushers in marriage equality. (King v. Burwell was 6-3.) Writing for the majority, Justice Kennedy is eloquent: “The first premise of this Court’s relevant precedents is that the right to personal choice regarding marriage is inherent in the concept of individual autonomy.” ... “[T]he right to marry is fundamental because it supports a two-person union unlike any other in its importance to the committed individuals.” ... “Since same-sex couples may now exercise the fundamental right to marry in all States, there is no lawful basis for a State to refuse to recognize a lawful same-sex marriage performed in another State on the ground of its same-sex character.”
So moved and so ordered. #lovewins
That stipulated, it is worth taking a moment to ponder Chief Justice Roberts’s dissent (by far the most principled of the four dissents). In King v. Burwell the Chief Justice prevented a usurpation of the wishes of Congress. In Obergefell v. Hodges he fears that the Court is short-circuiting a democratic process that is likely to result in support for same-sex marriage anyway. And we must acknowledge that he concedes the appeal of same-sex marriage: “Petitioners make strong arguments rooted in social policy and considerations of fairness. They contend that same-sex couples should be allowed to affirm their love and commitment through marriage, just like opposite-sex couples. That position has undeniable appeal…”
On the other hand, “[T]his Court is not a legislature. Whether same-sex marriage is a good idea should be of no concern to us. Under the Constitution, judges have the power to say what the law is, not what it should be.”
The Chief Justice is making a process point, not an ideological or homophobic one. As Emily Bazelon points out, there is some merit to his claim. But in the end I disagree with the Chief Justice. Equal protection under the law is part and parcel of the Constitution. That position has undeniable appeal too.
Last week, after Dr. James Billington announced his retirement as the Director of the Library of Congress (LC), I argued that LC is better positioned to digitize books than Google.
Morally and culturally better positioned, that is. Unlike at Google, which has an inescapable profit motive behind its book scanning program, LC exists to serve the public interest.
In addition to serving the public interest, a massive monograph scanning project spearheaded by LC would honor Dr. Billington's legacy of spurring digitization efforts. As his official biography notes, "Dr. Billington has championed the Library of Congress’s National Digital Library program, which makes freely available online millions of American historical and cultural documents from the vast collections of the Library and other research institutions on the Library's website at www.loc.gov. These one-of-a-kind American Memory materials and the Library’s other Internet services -- including the Congress.gov congressional database and information from the U.S. Copyright Office -- are widely used in K-12 education."
This is an outstanding legacy, although it must be said that what's been digitized so far is the "low hanging fruit" of public domain or historic documents. That was the right place to start, and now we can go further. In the Washington Post, Philip Kennicott offers this suggestion for the next Librarian of Congress: "Every word on every page of every public-domain book sitting in the Library of Congress should be available online. That’s a lot to ask, but why ask for less?"
I agree with Kennicott, and would take it even further. The reason he stops at public domain materials, of course, is that digitizing anything still in copyright would lead to protracted legal battles of the kind that Google endured. (I always supported the legality of Google's efforts, if not their cultural legitimacy.) But we must remember that, in the United States, the only items squarely in the public domain were published before 1923. Some later materials are in the public domain, but as a matter of exceptions and technicalities rather than rule.
So Kennicott's proposal would not account for a good deal of material published in the last 92 years. This is especially true for items published in recent decades.
The crux of the problem is the nation's antiquated copyright law, which was last substantially revised in 1976. That may have been the bicentennial year, but it was also well before the widespread use of the Web. Our codified notions about how to incentivize authors, and protect their rights, do not jibe with how knowledge and information flow today.
In a nutshell, then, we need to fix copyright. (That's much easier typed than achieved, of course. The current system benefits incumbents such as publishers, so any widespread changes would be mightily resisted.) And who manages the US Copyright Office? Why, lo and behold, it's a division of the Library of Congress. Ultimately, copyright is the responsibility of the Librarian of Congress.
So we have our mission for whomever President Obama appoints and the Senate confirms as the 14th United States Librarian of Congress: digitize our books, and modernize our copyright law.
To my mind the arguments from the Authors Guild about copyright infringement were spurious and short sighted. Google posts snippets of in-copyright works, leading to new markets for generally obscure texts. This is very much biting the hand that feeds you.
And yet...and yet. There IS something unsettling about Google--a commercial entity with commercial aims--assuming this responsibility for digitizing such a prime piece of our cultural heritage. Google can claim it does no evil, but it does want to get rich.
And so, now that James Billington has announced his retirement as the Director of the Library of Congress, we have a moment. LC is better than Google, and LC should be leading this effort. Over the next weeks and months I'll be writing more about this. Dr. Billington retires in January; there is time to make the case.
The latest imbroglio in libraryland concerns Elsevier's updated policy for depositing articles into institutional repositories (IRs). For many years authors in Elsevier journals could deposit their articles into IRs immediately upon acceptance; now they will have to wait until the embargo period has lapsed (in many cases more than 12 months).
Obviously this move is meant to protect Elsevier's subscription revenues, which is what a rational capitalistic business should do. If immediate deposits to IRs represent--or could potentially represent--a threat to subscription revenues, the only thing to do is to ban such deposits. Elsevier's claim that, in fact, this is all about "unleashing the power of academic sharing" is the thinnest and most unconvincing of window dressings. The people behind the Confederation of Open Access Repositories were certainly not convinced; they responded to Elsevier with a successful petition campaign challenging the policy.
Long-time open access advocate Michael Eisen hit the nail on the head by arguing that there is a fundamental incompatibility between business-minded subscription journals and the social good of open access. Eisen's rhetoric about the venality of publishers becomes tiresome, and he understates the work involved in stewarding the scholarly record. But that's all rhetorical flash, easily dismissed. On the logical incompatibility between open access and business imperatives, he is spot on.
What's to be done? As ever, the root source of change lies in the academic reward system that currently values publication in subscription journals. This value system percolates into public policy, which also rewards such publications by permitting embargo periods and unquestioningly trusting the integrity of the peer review system. Our sense of what constitutes a publication, and what constitutes reward and evaluation, can only change in the academy. For one worthwhile perspective on how things should change, check out this post by Dr. Marius Buliga.
Brian Koppelman's recent interview with Bryan Garner, the author of Modern American Usage, resurfaced the unresolved tension in linguistic circles between prescriptivists and descriptivists. Garner is an unapologetic prescriptivist, willing to issue judgment about both correct and felicitous word usage. Linguists such as John McWhorter and Steven Pinker are descriptivists, suspicious of bright line edicts and preferring instead to observe how people speak without judging the correctness of what they say.
Garner is an attorney with a long career of guiding lawyers to write more clearly. He came to general awareness in 2001, with the publication of David Foster Wallace's essay "Tense Present" in Harper's. (Wallace later republished this essay in Consider the Lobster.) I re-read this piece last evening, and it is as fresh and brilliant as ever. In what is at first blush a review of Garner's erudite tome Modern American Usage, Wallace lays bare the ideological and human stakes underlying the debate between prescriptivism and descriptivism. It's well worth reading, including the digressions which Wallace advises readers to skip.
At root, prescriptivism is normative. You don't say "I ain't going." You do say, "I do not plan to attend." You don't observe, "He be trippin'," rather you proclaim, "That gentleman is momentarily indisposed due to the ingestion of a mind-altering substance." And so forth.
We all know this. And we all know that how people speak influences how others think of them. This may not be fair, but it has ever been so. The entire premise of My Fair Lady is about teaching a poor woman to speak differently, so that she may become a lady.
But who makes these rules and why should anyone obey them? They weren't passed by any legislature, and looking down one's nose at others about how they speak seems like a particularly tragic use of a fine education. Furthermore, standard English is not always elegant or concise. Sometimes it's just stuffy.
Garner's in on this racket, say the descriptivists. From his fancy perch he issues edicts and disrespects the vernacular language of marginalized people.
Hold on! says Wallace (defending Garner). All language is normative, and there is no way around it. There is always a dominant form--think of China, where Mandarin is the official language even though there are countless regional dialects. There has to be a base, there has to be a standard. So if Garner's brand of English usage fades away, another dominant approach will arise in its place.
Indeed it is true that dominant linguistic standards are enforced by privileged people, but this does not mean it is unwise to learn them. The way to get ahead is to learn to speak that privileged language, which means that descriptivists are actually harming the people they claim to support.
Both in his recent interview and in his 2004 essay "Making Peace in the Language Wars," Garner shies away from the sense of disrespect that his brand of prescriptivism engenders. It can land as one more tool of oppression, even though what Garner suggests about proper word usage could be the key to a changed life.
We can be respectful of people's sensitivities without pretending that prescriptivism has no value. Wallace notes that those who seek to make change, such as Mahatma Gandhi or Martin Luther King, always speak the language of the people with power. It's the only way to get them to listen.
For the last several months I've been reading George Eliot's masterpiece Middlemarch, as part of the yearlong "Mission Impossible" project sponsored by the Evanston Public Library. I wrote about Middlemarch last fall, and since then have nearly finished it. It is so engrossing that I am no longer sticking to the assigned readings; racing ahead is more rewarding.
One author I've always meant to read but never have is Anthony Trollope. That finally changed after absorbing Adam Gopnik's tribute to Trollope a few weeks ago. Gopnik's enthusiasm prompted the purchase of Trollope's Phineas Finn, the coming-of-age story of a rural young man who assumes a seat in Parliament. (A purchase which occurred at the Seminary Coop, one of the most glorious of Chicago's bookstores.)
This particular Gopnik sentence sings: "What makes Trollope a novelist rather than a polemicist is that, although he is on the side of reform, he is capable of empathetic engagement with its victims." Trollope's quest for reform is very specific to the political conditions of Britain in his lifetime. But the writer's imperative for empathy is universal.
The novel is the art form best suited to plumbing the depths of the human psyche, as it allows for a degree of interiority and exposition that is harder to achieve in other art forms. We are indeed living in a golden age of television, and I will miss Mad Men greatly. Nonetheless we will always need novels.
One ostensible truth about human nature is that, upon being presented with objective and accurate data about any given matter, people will dispassionately observe that evidence and follow wherever it leads. If it leads away from a previously held belief, so be it. We'll go happily towards the truth, skipping along the way. This belief is the raison d'être of evidence-based medicine, evidence-based librarianship, evidence-based anything.
Of course, this is not the way humans actually operate. At all.
When faced with evidence that challenges what we already believe, instead of incorporating that evidence into our belief system we are much more likely to discount it. We can do this in a couple of ways: by simply ignoring the contradictory evidence entirely, or by co-opting it so that it settles into what we already believe. As Maria Popova notes, "Our minds simply prefer explanations that take less effort to process, and consolidating conflicting facts with our existing beliefs is enormously straining."
And that brings me to "Deflategate," the latest scandal in the world of sports. Last week the NFL released a report claiming it was "more probable than not" that Patriots quarterback Tom Brady was "generally aware" that the footballs he used in the AFC championship game had been improperly deflated.
Those hedging phrases are gold mines for people on both sides of this argument. Patriots fans can claim that "more probable than not" leaves a lot of wiggle room. This implies that more declarative phrasing, such as "Tom Brady cheated and lied," would have satisfied them. Of course such clarity would not have appeased any Patriots fan worth their salt; it would merely have inspired the same sort of derision we've already seen.
Cue Popova, once again: "Our minds simply prefer explanations that take less effort to process, and consolidating conflicting facts with our existing beliefs is enormously straining."
The balance of the country, which does appear genuinely aggrieved by the success of the Patriots, is apt to see phrases like "more probable than not" and "generally aware" as nothing more than that fuddy-duddy talk lawyers do. It boils down to this: cheaters gonna cheat. Caveats begone! Context, who needs you?
Time for yet another recitation of Popova: "Our minds simply prefer explanations that take less effort to process, and consolidating conflicting facts with our existing beliefs is enormously straining."
Here we are then. I "generally align" with the anti-Brady camp, because I don't like Bill Belichick and think Tom Brady is an ass. But in an attempt to be as open-minded as human nature allows, I agree with Seth Stevenson that a four game suspension is wildly excessive, and would not have been levied on any other team which did the same thing. (And yeah, the evidence in the report is strong enough for me to think that Brady knew what he was doing and got caught doing it. This was a preponderance of evidence case.)
Sure, much of this is the easy theater of an NFL commissioner attempting to play tough by sacrificing a juicy target. I'll grant you that, New England. But Brady cheated, and you know that in your heart of hearts. I'm all for advocating consistent and proportionate punishments across the league once you grant that in return.
A few months ago data scientists at Google developed a proposal for ranking search results by the trustworthiness of the sources. This would be an alternative to the current approach, which ranks sources by popularity (i.e., the number of other web sites that link to the sources you see in your results).
As Google's team put it, "We propose a new approach that relies on endogenous signals, namely, the correctness of factual information provided by the source. A source that has few false facts is considered to be trustworthy...The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases. We propose a way to distinguish errors made in the extraction process from factual errors in the web source per se, by using joint inference in a novel multi-layer probabilistic model." (Bold mine, for reasons to be argued momentarily.)
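To make the core idea concrete, here is a minimal sketch of "trust as fraction of correct facts." The knowledge base entries, source facts, and function names below are invented for illustration; Google's actual model also uses joint probabilistic inference to separate extraction errors from genuine factual errors in the source, which this toy version does not attempt.

```python
# Hypothetical reference knowledge base of (subject, predicate) -> object facts.
KNOWLEDGE_BASE = {
    ("obama", "born_in"): "honolulu",
    ("eiffel_tower", "located_in"): "paris",
}

def trust_score(extracted_facts):
    """Score a source by the fraction of its extracted facts that
    agree with the knowledge base. Facts the knowledge base knows
    nothing about are ignored rather than counted against the source."""
    known = [(s, p, o) for (s, p, o) in extracted_facts
             if (s, p) in KNOWLEDGE_BASE]
    if not known:
        return None  # no overlap with the knowledge base; trust is unknown
    correct = sum(1 for (s, p, o) in known if KNOWLEDGE_BASE[(s, p)] == o)
    return correct / len(known)

# A source asserting one true fact and one false fact scores 0.5.
facts = [("obama", "born_in", "honolulu"),
         ("eiffel_tower", "located_in", "london")]
print(trust_score(facts))  # 0.5
```

The interesting design choice, and the reason the quoted passage stresses joint inference, is that a low score can mean either an untrustworthy source or a faulty extractor; this sketch cannot tell the two apart.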
In a smart Slate commentary, David Weinberger and Dan Gillmor offer a qualified endorsement of this proposal. They note that it is attractive compared to the rudimentary popularity ranking that currently constitutes the Google method. Ranking by trustworthiness has value, as long as Google never becomes the arbiter of what is true. Determining truth claims in various fields should be left to the practitioners of those fields, in the process of scholarship and exploration that has long defined the search for knowledge. In other words: Google should remain a secondary rather than primary source, no matter how sophisticated its search algorithms become. As Weinberger and Gillmor note, "Google is not smarter than the experts in science and other disciplines who are engaged in continuous, well-founded, evidence-based arguments."
That all makes sense, even though we can quibble about how much ground to cede to "experts" in our democratic, Wikipedia-article-producing era. But that's a topic for another post.
For now, back to the bolded section: "The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases. We propose a way to distinguish errors made in the extraction process from factual errors in the web source per se, by using joint inference in a novel multi-layer probabilistic model."
One of the long-prized skills for librarians is the ability to guide people to trustworthy sources. This can happen in multiple ways--either a direct and straightforward referral to a particular source, or (hopefully) via an instructional session that provides people with tools for evaluating the trustworthiness of sources they find on their own.
In either case, the librarian is the filter for trustworthiness.
If the Google team's proposal goes forward, there would be less need--perhaps eventually no need--for librarians to serve as such a filter. Assuming that most information seeking begins with a Web search, Google's routing to reliable sources would serve effectively to get people started. (Many sources will not be available to all readers, unless they are open access—academic librarians will still need to integrate their holdings into Google searches for the benefit of their communities). Of course, the identification of sources is just the start of a research project. People still need to evaluate the credibility of the sources for themselves, and then synthesize their understanding into new knowledge.
In general, these phases of evaluation and synthesis have not been the province of librarians. We have been concerned with selecting, organizing and making available quality sources. If this phase becomes more ubiquitously handled elsewhere, whither librarianship?
I argue for two actions in a “Google trustworthy sources” era: concentrate on helping people synthesize and evaluate the content they locate, moving into a more pedagogical vein; and intensify our focus on collecting, curating and preserving the unique content of our own institutions. The first action would be a worthwhile stretch, the second would allow us to apply familiar and unique skills. Both approaches would demonstrate the continued vitality of librarians in the digital age.