Some Thoughts on Methodology

I am grateful for the opportunity to participate in this conference on the topic of corpus linguistics and the Second Amendment. The timing is a bit unfortunate, however, as my co-author (Josh Blackman) and I are not quite ready to make our findings public (hopefully this fall), and we have learned that releasing findings prematurely can lead to a misunderstanding of their implications. This post will thus focus mostly on methodology and provide just one finding from our current research, regarding an erroneous claim by Justice Stevens.

The Second Amendment is a hotly debated and, for most, passionately felt topic. (I may be an outlier, as I don't have strong feelings on the Amendment's substance as a policy matter, though I do have strong feelings on constitutional interpretation.) Because of these passions, the likelihood of succumbing to motivated reasoning and confirmation bias is high. First, a quick survey of these two related psychological tendencies. Motivated reasoning occurs when one reasons so that one's conclusions fit one's pre-existing loyalties or views. The classic example involved showing students film of disputed referee's calls in a college football game between Princeton and Dartmouth. Princeton and Dartmouth students viewed the same plays completely differently from each other, and each in a way entirely consistent with their loyalties. And research has repeatedly shown that, whether the subject is politics or anything else, the more strongly held a position, the more likely one is to reason toward a conclusion consistent with that position.

Confirmation bias is motivated reasoning's cousin. It is the tendency to process information in a biased way, focusing on some evidence while ignoring other evidence so as to stay consistent with our values or views. Thus, one is prone to ignore evidence that counters one's predetermined position but to fixate on evidence that supports it.

Given that few scholars (or jurists) come to the Second Amendment without strongly held views, and many have written or litigated on the topic in the past, the likelihood of succumbing to one or both of these psychological vices is high. That is compounded by the fact that much of the corpus linguistic analysis of these historical materials is qualitative in nature: reading through search results to determine what sense of a word or phrase is being used. It is human nature to process such evidence in a way that conforms to one's strongly held, pre-existing views about what the Second Amendment means. This tendency undermines some of the objective and empirical benefits of corpus linguistic methodology.

All is not lost, however. There is a way to try to reduce this naturally occurring bias, a set of best practices that anyone can implement, though it takes more work. We can import methodologies from other fields that do content coding. We see this a lot in media studies, where scholars try to quantify the qualitative, such as the content of newspaper articles, books, or television programs. The gold standard here is a type of double-blindedness, though not the same double-blind setup used in experiments.

First, you want multiple coders, and you don't want the study's author to be one of them. Thus, for our research, my co-author and I did not ourselves perform the categorization of the competing senses of various Second Amendment terms, like bear arms or keep arms or the right of the people. In that way we did not infect the analysis with our own biases. Instead, we followed some of the practices that Jesse Egbert and I have explained elsewhere and had others do the classification. And we had them do the work independently of each other.

Specifically, law students at various schools around the country performed the analysis based on a guide we provided them. This is standard methodology in the social sciences for content coding, but it is not so common in the law, whose foundations lie more in the humanities (history and philosophy). These coders should be blind, so to speak, in two ways. First, they should be blind to the opinions or hypotheses of the person for whom they are doing the coding. You can imagine that if you tell your coders that you really think the Second Amendment means X, they will be more likely to find that the data support this. So we don't want to prime them. What I do is tell coders that I don't really care what they find; I just want them to be as accurate as possible.

The second way you want coders to be blind is that they should work independently of any other coders. It helps if they don't know who the other coders are, so they can't talk to each other and influence one another. Rather, you want them following the coding rubric you have provided and making decisions based on that and their own judgment. With multiple coders, you can then measure how much the coders agree with one another. While there are various sophisticated statistics for measuring inter-coder agreement, I try to keep it simple and gravitate toward just reporting the percentage of agreement between coders.
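To make the percent-of-agreement idea concrete, here is a minimal sketch in Python. The sense labels and the ten codings are invented for illustration; they are not drawn from our data.

```python
# Minimal sketch of simple percent agreement between two coders.
# The sense labels and codings below are hypothetical, not actual study data.

def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders chose the same sense label."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical codings for ten concordance lines of "bear arms"
coder_a = ["military", "military", "carry", "ambiguous", "military",
           "carry", "military", "military", "ambiguous", "military"]
coder_b = ["military", "carry", "carry", "ambiguous", "military",
           "carry", "military", "military", "military", "military"]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 80%
```

With more than two coders, one common convention is simply to average the pairwise agreement figures.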

Of course, you could get coders who wildly disagree. That could happen for a few reasons. One is that the data are very ambiguous. One way to help with that, besides providing a category for ambiguous uses, is to instruct coders that if they code something as ambiguous, they should then make a second selection if they are leaning towards a particular sense. People have different comfort levels with ambiguity, so some will be more likely to mark things ambiguous whenever they have doubt, while others will select a category even with some doubt. Having a second selection when someone first codes something as ambiguous helps with that.
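As a hypothetical illustration of that rule, the sketch below collapses an "ambiguous" primary code into the coder's second-choice sense when one was given. The field names and records are invented for this example, not taken from our coding guide.

```python
# Hypothetical sketch: fold a coder's "ambiguous" code into their
# second-choice lean, if they provided one. Records below are invented.

def resolve(code, lean=None):
    """Use the second-choice lean when the primary code is 'ambiguous'."""
    if code == "ambiguous" and lean:
        return lean
    return code

codings = [
    {"code": "military", "lean": None},
    {"code": "ambiguous", "lean": "carry"},  # coder leaned toward 'carry'
    {"code": "ambiguous", "lean": None},     # no lean, so it stays ambiguous
]

print([resolve(c["code"], c["lean"]) for c in codings])
# ['military', 'carry', 'ambiguous']
```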

In short, these methodological measures make it more likely that one's results possess the twin pillars of good social science research: validity (or accuracy) and reliability (or reproducibility).