Reason magazine published a video essay this week on the state of gun control research.
The accompanying article sums it up like this:
There has been a massive research effort going back decades to determine whether gun control measures work. A 2020 analysis by the RAND Corporation, a nonprofit research organization, parsed the results of 27,900 research publications on the effectiveness of gun control laws. From this vast body of work, the RAND authors found only 123 studies, or 0.4 percent, that tested the effects rigorously. Some of the other 27,777 studies may have been useful for non-empirical discussions, but many others were deeply flawed.
We took a look at the significance of the 123 rigorous empirical studies and what they actually say about the efficacy of gun control laws.
The answer: nothing. The 123 studies that met RAND’s criteria may have been the best of the 27,900 that were analyzed, but they still had serious statistical defects, such as a lack of controls, too many parameters or hypotheses for the data, undisclosed data, erroneous data, misspecified models, and other problems.
And these glaring methodological flaws are not specific to gun control research; they are typical of how the academic publishing industry responds to demands from political partisans for scientific evidence that does not exist.
It’s worth reading and watching the whole thing. It gets deep into the technical specifics of where the conventional wisdom — that there are sound public health conclusions about gun ownership — goes astray. The most notable bit is about how in the aggregate, given the 0.05 p-value that journals standardize on, the studies produced positive results at about the same rate that you’d expect from random chance:
In terms of the gun control studies deemed rigorous by RAND, this means that even if there were no relationship between gun laws and violence — much like the relationship between drinking milk and getting into car accidents — we’d nevertheless expect about five percent of the studies’ 722 tests, or 36 results, to show that gun regulations had a significant impact. But the actual papers found positive results for only 18 combinations of gun control measure and outcome (such as waiting periods and gun suicides). That’s not directly comparable to the 36 expected false positives, since some combinations had the support of multiple studies. But it’s not out of line with what we would expect if gun control measures made no difference.
Also concerning is the fact that there was only one negative result in which the researchers reported that a gun control measure seemed to lead to an increase in bad outcomes or more violence. Given the large number of studies done on this topic, there’s a high statistical likelihood that researchers would have come up with more such findings just by random chance. This indicates that researchers may have suppressed results that suggest gun control measures are not working as intended.
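To make the arithmetic in that passage concrete, here's a quick back-of-the-envelope sketch in Python. The 722 tests, the 0.05 significance threshold, and the lone wrong-direction finding come straight from the quote above; treating the tests as independent and two-sided is our simplifying assumption, added purely for illustration.

```python
# Back-of-the-envelope check of the false-positive math quoted above.
# The 722 tests and the 0.05 threshold come from the Reason/RAND
# discussion; treating the tests as independent and two-sided is our
# simplifying assumption (tests within a study are surely correlated).
from scipy.stats import binom

n_tests = 722  # significance tests across the 123 rigorous studies
alpha = 0.05   # the p-value threshold journals standardize on

# Expected false positives if gun laws had no effect at all:
expected_false_positives = n_tests * alpha
print(f"Expected false positives under the null: {expected_false_positives:.0f}")  # ~36

# Under a two-sided test of a nonexistent effect, a "significant" result
# is equally likely to point in either direction. So roughly half of
# those ~36 chance findings should say gun laws *increased* violence:
expected_wrong_direction = n_tests * alpha / 2
print(f"Expected wrong-direction findings under the null: {expected_wrong_direction:.0f}")  # ~18

# The literature contains only one such finding. The odds of seeing at
# most 1 when ~18 are expected, assuming independence:
p_at_most_one = binom.cdf(1, n_tests, alpha / 2)
print(f"P(<= 1 wrong-direction result by chance): {p_at_most_one:.1e}")
```

That last probability is vanishingly small, which is the quote's point: a literature this one-sided suggests a filter on which results see print, not an absence of noise.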
We won’t dive any deeper into the statistics here; Reason has it covered. What we can talk about is why this happens.
If you want to predict what kinds of studies will get published, look at which studies will get rewarded. Incentives determine outcomes, because they define the direction in which an ecosystem’s selective pressures will push. And to understand which studies get rewarded, you have to understand how they get used. What demand are they fulfilling?
Academia is theoretically a disinterested search for truth. And sure, many of the people working within it do genuinely approach their work that way. But the accolades and funding that statistical rigor earns pale in comparison to the returns on having a study hit big in the mainstream press. In that domain, statistical rigor doesn’t matter. What matters is a splashy headline.
The reason that matters is that you can’t (well, shouldn’t) go on the news or go to a cocktail party and state your opinion as fact. What you can do — and in fact looks quite smart when you do it! — is to say, “Did you know that studies show <result that happens to align with my preexisting preference>?”
This creates heavy selective pressure towards studies that reinforce that preference, and away from those that disconfirm it. That pressure can be so strong that it turns an entire field from a scientific enterprise into an emergent preference laundering machine. Preexisting preferences and grant money go in one end, peer-reviewed studies with ≤0.05 p-values come out the other end. Crucially, this doesn’t require any ill intentions among the individuals in the field. They can all be working earnestly on well-designed studies. It’s simple publication bias: the ones who luck into the “right” results will get published, and the ones who don’t won’t.
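As a toy illustration of that selection effect (ours, not Reason’s or RAND’s), here’s a small simulation: a thousand honest researchers each study a policy whose true effect is zero, and only results that are statistically significant in the preferred direction get published. The published record ends up unanimous that the policy works, even though nobody cheated.

```python
# Toy simulation of publication bias: every individual study is honest,
# but the published record still converges on the preferred answer.
# All numbers here are illustrative, not drawn from the RAND analysis.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 1_000  # honest studies of a policy with zero true effect
sample_size = 200  # observations per study

published = []
for _ in range(n_studies):
    # Each study estimates the policy's effect from noisy data.
    data = rng.normal(0.0, 1.0, sample_size)  # true effect is exactly zero
    estimate = data.mean()
    std_err = data.std(ddof=1) / np.sqrt(sample_size)
    z = estimate / std_err

    # The filter: only results significant at p <= 0.05 (two-sided) that
    # also point in the preferred direction ("the policy reduced the bad
    # outcome") make it into print.
    if z < -1.96:
        published.append(estimate)

print(f"Studies run:       {n_studies}")
print(f"Studies published: {len(published)}")              # roughly 2-3%
print(f"Mean published effect: {np.mean(published):.3f}")  # looks like a real benefit
```

Every study in the simulation was run cleanly; the bias lives entirely in the filter. That’s the “emergent” part: no individual has to do anything wrong for the aggregate output to be systematically skewed.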
The power of preference laundering is that it turns a group of people’s personal preference into the “official” answer for what smart people are supposed to think. Back in “OSD 110: What smart people are supposed to think”, we wrote about this effect in light of a New York Times news digest about guns that was full of basic factual errors:
Yes, all the facts in the NYT newsletter about guns are wrong. But that’s the least interesting thing about it.
After all, this is the internet. Everything’s wrong all the time! That’s the whole fun of it! Being upset that someone on the internet is wrong is like being sad that there are waves in the ocean. The whole point is that it’s turbulent, and sometimes the waves crash into each other and make cool shapes. The sane thing is to do some boogie boarding and build a little sandcastle and wear sunscreen and have a Coke and just hang out for a little bit, and don’t get worked up about the shape of some particular wave.
What is interesting about the NYT newsletter is the meta-game that it’s playing. We’ve talked many times about how cultural disagreements are a tussle over what is normalized vs. what is stigmatized (e.g. in OSD 80, OSD 99, and OSD 107).
What’s really going on in the NYT newsletter has almost nothing to do with content itself. The “facts” literally don’t matter, as you may have observed if you’ve ever noticed how useless facts are at changing people’s opinions on this stuff. So the newsletter could have said pretty much anything. The point isn’t to communicate information, the point is to draw a box and label it, “This is what smart people believe.” Once they’ve made that box, it doesn’t matter much what goes into it. If people want to be seen as smart, they’ll sign up for whatever’s in the box.
Our Twitter bio is “Making people proud to care about gun rights.” That’s because we’re accelerating a trend: people are starting to poke at what they’ve been told thoughtful people are supposed to think about guns.
A preference laundering machine is powerful, but it’s also fragile. Because it hinges on monopolizing “this is what smart people are supposed to think”, it’s vulnerable to any hint that it is in fact built on a foundation of basic mistakes.
So this doesn’t require vitriol or name-calling or an us vs. them approach. All it requires is for you to point out, “Hey, you care about getting things right. That’s awesome. So here’s what you haven’t been told about guns.”
This week’s links
Playing with a WWII-era infrared scope
Mounted on an M1 carbine.
Bolt-action rifles for home defense
Some highly on-brand Paul Harrell content. Home defense with a .338 Win Mag.
Alabama Senate approves bill nullifying some federal executive orders related to guns
Legally these state-level acts are often purely symbolic, for a couple of reasons. First, states often write them to apply only to future laws or, as in this case, only to cases where the state won’t risk losing federal funding. Second, states are legally free to not cooperate with federal law enforcement, but courts do not allow them to actively override federal law.
But all that said, this trend is a good reminder of the themes from this OSD piece from a while ago, “Collateral damage, race, and the Virginia gun control bill”.
OSD office hours
If you’re a new gun owner, thinking about becoming one, or know someone who is, come to OSD office hours. It’s a free 30-minute video call with an OSD team member to ask any and all of your questions.
Support
Like what we’re doing? You can support us at the link below.
Merch
And the best kind of support is to rock our merch and spread the word. Top-quality hats, t-shirts, and patches with a subtle OSD flair. Even a gallery-quality print to hang on the wall.