Tuesday, November 11, 2008

Movie Violence and Assaults Paper

I've blogged about this paper before, but FYI for all the playaz, Dahl and DellaVigna's Does Movie Violence Increase Violent Crime? has been accepted for publication at the Quarterly Journal of Economics, one of our most prestigious journals (top 3). Here's what they find, for those who don't remember from the last time I blogged about this interesting paper.
Abstract

Laboratory experiments in psychology find that media violence increases aggression in the short run. We analyze whether media violence affects violent crime in the field. We exploit variation in the violence of blockbuster movies from 1995 to 2004, and study the effect on same-day assaults. We find that violent crime decreases on days with larger theater audiences for violent movies. The effect is partly due to voluntary incapacitation: between 6PM and 12AM, a one million increase in the audience for violent movies reduces violent crime by 1.1 to 1.3 percent. After exposure to the movie, between 12AM and 6AM, violent crime is reduced by an even larger percent. This finding is explained by the self-selection of violent individuals into violent movie attendance, leading to a substitution away from more volatile activities. In particular, movie attendance appears to reduce alcohol consumption. The results emphasize that media exposure affects behavior not only via content, but also because it changes time spent in alternative activities. The substitution away from more dangerous activities in the field can explain the differences with the laboratory findings. Our estimates suggest that in the short-run violent movies deter almost 1,000 assaults on an average weekend. While our design does not allow us to estimate long-run effects, we find no evidence of medium-run effects up to three weeks after initial exposure.
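Just to see how the abstract's numbers hang together, here's a back-of-envelope sketch. The per-million effect (1.1 to 1.3 percent) and the ~1,000 weekend figure are from the abstract; the audience and baseline assault numbers below are hypothetical, picked only to show the scale of the arithmetic, and the linear scaling is my own simplification, not the paper's estimation:

```python
# Back-of-envelope check on the abstract's effect sizes.
# reduction_per_million is the midpoint of the 1.1-1.3% range quoted in
# the abstract; the audience and baseline figures are HYPOTHETICAL and
# not taken from the paper.
reduction_per_million = 0.012   # crime reduction per 1M violent-movie viewers
audience_millions = 10.0        # hypothetical weekend violent-movie audience
baseline_assaults = 8000        # hypothetical weekend assault count in sample

# Naive linear extrapolation of the per-million estimate:
deterred = baseline_assaults * reduction_per_million * audience_millions
print(round(deterred))  # -> 960, the same order as the ~1,000 quoted
```

Under those made-up inputs you land in the neighborhood of the ~1,000 deterred assaults the abstract reports, which at least shows the headline number is an extrapolation of the per-million estimate, not a separate finding.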
So, to summarize: no, it does not cause crime. It's a fairly strong paper in terms of its research design and tests, or at least it looks that way. My one concern is that the NIBRS data does not include all states, and even within participating states, not all jurisdictions. It specifically does not include any jurisdiction with a population greater than a million people (or at least it didn't in 2005; I haven't yet looked at the 2006 data). Does this matter? They also don't have box office data by region (they use daily national ticket sales for all movies, violent and non-violent alike, from 1994-2005). So in other words, as far as I can tell, they're relying on the implicit assumption that ticket sales are uniformly distributed: the people in Pokatown, Kansas (I made that place up) have the same demand for Saw as those in Nashville, TN. So I'm wondering whether the paper really addresses this much at all. That said, the NIBRS data is perfect for this kind of test. It has the following strengths:
1. Unlike the Uniform Crime Reports, NIBRS does not assign just one crime to each incident. UCR picks the worst crime and reports it, whereas NIBRS tracks all crimes associated with a single incident. Say there is a bank robbery in which one of the perpetrators (let's say there are 3 perps) kills one person while another robs the bank. Because the crimes happened in concert, UCR lists a single incident, and only the murder is reported - not the robbery. That is, at least, my understanding of the "hierarchy rule". NIBRS, by contrast, reports information on the offense(s) (even if there are multiple ones), the offender(s), the victim(s), the property damage(s), and the arrest(s). I think it's roughly 6-7 dimensions. ... WHICH makes it a pain in the booty to work with, but nonetheless, there you go.
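To make the hierarchy rule concrete, here's a toy sketch of the two reporting conventions. The severity ranking is made up for illustration - it is not the official UCR hierarchy - and the function names are mine:

```python
# Toy sketch of UCR's hierarchy rule vs. NIBRS-style recording.
# The severity ranking below is ILLUSTRATIVE, not the official UCR hierarchy.
SEVERITY = {"murder": 1, "robbery": 2, "assault": 3}  # lower = more severe

def ucr_report(incident_offenses):
    """UCR hierarchy rule: report only the most severe offense per incident."""
    return min(incident_offenses, key=lambda o: SEVERITY[o])

def nibrs_report(incident_offenses):
    """NIBRS-style recording: report every offense tied to the incident."""
    return list(incident_offenses)

# The bank robbery example from the text: one incident, two offenses.
incident = ["robbery", "murder"]
print(ucr_report(incident))    # -> murder (the robbery is lost)
print(nibrs_report(incident))  # -> ['robbery', 'murder']
```

The point of the sketch is just that the robbery vanishes from the UCR count but survives in the NIBRS record.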

2. Unlike the UCR data, NIBRS records all offenses, even those that don't actually turn into an arrest. So this is not just crimes but a broad measure of violence, making it a perfect measure of the severe physical violence that is believed to be the outcome of viewing media violence - or at least the more severe forms of it. Smaller kinds of aggression that don't necessarily trigger a law enforcement visit are not recorded by NIBRS. But we may also think that those are not the ones that merit overt government regulation either.

3. The NIBRS data tells you the calendar day (e.g., January 2nd, 2004) and the time of day (e.g., 8:00PM) at which the offense occurred. UCR, of course, does not.
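That time-of-day field is exactly what lets the paper split crime into the windows the abstract describes. A minimal sketch of that bucketing, assuming you've already parsed the NIBRS timestamp into a `datetime` (the window labels follow the abstract; the code is mine, not theirs):

```python
# Sketch: bucketing offense timestamps into the time windows the paper
# analyzes - 6PM-12AM (while audiences are in the theater) vs. 12AM-6AM
# (after exposure). Assumes the NIBRS record is already a datetime.
from datetime import datetime

def time_window(ts: datetime) -> str:
    """Map an offense timestamp to one of the paper's analysis windows."""
    h = ts.hour
    if 18 <= h < 24:
        return "6PM-12AM"
    if 0 <= h < 6:
        return "12AM-6AM"
    return "other"

print(time_window(datetime(2004, 1, 2, 20, 0)))  # -> 6PM-12AM
print(time_window(datetime(2004, 1, 3, 2, 30)))  # -> 12AM-6AM
```

With daily, hour-stamped offense counts in hand, matching them to same-day national ticket sales is what makes the whole design possible; UCR's annual aggregates simply couldn't do this.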
The tradeoff is that you gain this detailed information at a real cost: self-selection, since some jurisdictions choose not to report. The question is whether you think jurisdiction selection will contaminate the research design. Do you think the decision to report is correlated with the kinds of people who will or will not receive this "treatment"? Secondly, NIBRS does technically report city and state, but for some reason - probably because they don't have box office data at that level of disaggregation - Dahl and DellaVigna don't use it.

I am enthusiastic about this paper, but that does not mean I necessarily believe it. I read a paper the other day in which a person randomly assigned expert opinions about wines to a grocery store shelf of wines and collected sales data for a summer. Those results I believe, even if I don't know whether the question is of huge significance. This one, on the other hand, is plausibly of huge significance, but I don't know if I really believe the results, per se. That is because, ideally, you would assign the violent movie to some people but not others, observe the treated and untreated individuals before and after the treatment, and then see whether they went out and committed any violence. Here, the results are not quite so clean. But the collection of evidence is intriguing. Note how specific their results are: they show up only for the hours during and immediately after the movie, and only for the really violent movies, not all of them. It's the way this paper picks at the hypothesis and presents a collection of results with an intuitive explanation - the violent people went to see Saw instead of going out drinking. They are voluntarily incapacitated during Saw, and because they weren't drinking, they don't go out afterwards and get into any fights.

It at least merits some serious, closer study. What's interesting is that this is the sort of thing that would never show up in a laboratory setting, but might show up in a field setting, because the field setting allows for real decision-making. As they note in the paper, the lab studies all measure partial equilibrium effects, but the real world is going to give you something more like a general equilibrium. Anyway, check the paper out.
