Mimsy Were the Borogoves

Mimsy Were the Technocrats: As long as we keep talking about it, it’s technology.

Should the government (and the CDC) fund research into gun violence?

Jerry Stratton, February 14, 2018


I am becoming more and more convinced that government funding retards the advancement of science, not just by denying funding to lines of research that the bureaucrats decide can’t possibly go anywhere but also by focusing more on a bureaucratic consensus of what results are allowable than on a search for the truth wherever it leads. Science News reminded me recently of a blatant example of that from the nineties.

In For trauma surgeon, gun violence is personal in the November 11, 2017 Science News, Aimee Cunningham writes:

One roadblock is Congress, which has severely limited federal funding to study gun violence. “Why wouldn’t we want to know what the truth is and what the data show?” [Joseph Sakran] asks… “Everyone should want that.”

The real question is, should the government fund any research, or do political and bureaucratic considerations hold advancement back when the government gets involved?

The truth is, Congress has not limited federal funding to study gun violence. What Congress did was ban the Centers for Disease Control from using its funds to advocate or promote gun control. This is general U.S. policy, that government bureaucrats are not supposed to use taxes for political lobbying.1 The CDC remains free to fund whatever research they want, and as long as they stick to real scientific research, there will be no political will to cut their funding as happened in 1996.

But more importantly, what Science News is eliding is that there was a good reason for cutting the CDC’s funding back in 1996. People who want the truth and want to know what the data show do not want the CDC sucking funds away from real research into violence. Before the Congressional ban on politically-motivated spending, the CDC’s “research” into self-defense was never science. The very act of treating self-defense as a disease guaranteed that their research would be flawed. The CDC’s funding went to people who were known to promote the bureaucrats’ line, and who were known to interpret data far beyond what the data said. They then refused to make their data available to other scientists despite laws requiring publicly funded data to be made public. And as far as that goes, disclosure laws shouldn’t have mattered. The scientific method requires public data.

The scientific method is simple and unforgiving: you need a theory, and you need a replicable test or prediction that could falsify that theory. If your theory fails that prediction, your theory is not true. If your theory's prediction is met, your theory is still not proven; it has merely survived one attempt to falsify it. If that isn't your method, then you aren't using the scientific method, and you aren't doing science. Further, if you don't make your methods and data available for replication, you're also not doing science.

The CDC never did science, at least in their research on gun ownership. All they did was waste taxes mining their data for anything that would support their political advocacy. Perhaps the most egregious example was the Kellerman series of studies, which culminated in the claim that if you own a firearm, you're more likely to be killed by a knife and that, therefore, you should get rid of that firearm. This was justifiably ridiculed as the "if you're going boating, for god's sake don't wear a life preserver!" hypothesis. But even that gives it too much credit, because there was no hypothesis. The fund recipients just sifted and massaged the data until they found something that played to their preconceptions.

Not the truth, and not what the data showed. Just something that would play well with the bureaucrats at the CDC. CDC-funded research held back progress in reducing violence.

That trend of government funding retarding science in the field of gun-related violence continues today. You only have to look into the back issues of Science News to see it. They reported in 2016 that researchers had a theory: guns cause suicide. Therefore, there should be more suicides among soldiers on combat duty, because that's when soldiers are allowed to have firearms. When they looked at the data, however, they discovered that soldiers with firearms (on combat duty) are less likely to commit suicide than soldiers without firearms (off combat duty). They refused to entertain the possibility that firearms are not related to suicide.

“The association between deployment and suicide is not as simple as we expected,” Nock says.

This is not science. Unless they rid themselves of their blinders, they will never find whatever was causing the increase in suicide among off-duty soldiers, except by accident; and they can't remove their blinders unless they forgo government funding.

And if they do, then like prion researchers and others, they'll be considered crackpots for a decade while people who could have been helped by their findings die, because everyone else will be afraid to replicate their findings for fear of losing government funding.

Back in the nineties, when I was still unsure whether gun control was beneficial or harmful, it was the CDC's biased reports that convinced me the anti-gun side were liars: that they were anti-gun, not pro-science. Here are examples of the kinds of "studies" that the CDC funded, from Arthur Kellerman. These were all high-profile, because their results matched what the government and many in the media wanted to hear. The first, limited to King County, set the tone for the rest, which were funded by Health and Human Services2 despite the extremely poor science of the first one.

Kellerman, 1986, Protection or peril? An analysis of firearm-related deaths in the home.

In 1986, Kellerman claimed that keeping a firearm in the home made it 43 times more likely that the firearm would be used for something other than killing a criminal in self-defense.

First, that's a stupid claim. It's like measuring the effectiveness of police officers by how many criminals they kill. It makes no sense. And using Kellerman's own data, you can calculate the same odds for deaths not involving firearms versus protective killings not involving firearms. The ratio becomes 397/4, or about 99 to 1: keep anything other than a firearm in your home, and you are 99 times more likely to die than to kill a criminal.
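The arithmetic is simple enough to check. A quick sketch, using only the figures quoted above (397 deaths not involving a firearm against 4 protective killings not involving one; these counts come from this article's reading of Kellerman's data, not from any published table):

```python
# Kellerman-style "risk ratio", computed from the figures quoted above.
# These counts (397 non-gun deaths, 4 non-gun protective killings) are
# this article's reading of Kellerman's data, not an official table.
non_gun_deaths = 397
non_gun_protective_killings = 4

ratio = non_gun_deaths / non_gun_protective_killings
print(f"about {ratio:.0f} to 1")  # about 99 to 1
```

The point of the sketch is that the same division that produced the headline 43-to-1 figure produces an even scarier number for everything in the home that is not a firearm.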

So it’s just plain a stupid conclusion.

But, if you take a closer look at the study, it’s even more deceptive than that. For one thing, almost all of the deaths are the result of suicides. Suicide rates have no relation to firearms ownership, and studies—such as the military study I mentioned above—repeatedly show that access to firearms does not affect suicide rates.

So, take out the suicides and suddenly the number is only two to one. Remember that we're still "measuring the effectiveness of police officers solely on the basis of the number of criminals they kill"; even so, you might argue that this still supports the claim that owning firearms is more dangerous than not owning firearms. But, as it turns out, Kellerman undercounted self-defenses: he didn't bother to check any further than whether or not an arrest and trial occurred. If an arrest and trial occurred, it was not counted as self-defense, whether or not the trial resulted in acquittal, and whether or not a conviction was successfully appealed. Here is the quote from Kellerman's article:

Gunshot deaths involving the intentional shooting of one person by another were considered homicides.

Self-protection homicides were considered “justifiable” if they involved the killing of a felon during the commission of a crime; they were considered “self-defense” if that was the determination of the investigating police department and the King County prosecutor’s office.

All homicides resulting in criminal charges and all unsolved homicides were considered criminal homicides.

Note the complete lack of any mention of “courts” and “trials” in that definition of what he counted as self-defense.

Finally, even though this probably modifies the ratio in favor of self-defense, the very design of the study was flawed: it looked only at households where a death occurred. Would you study the usefulness of owning an automobile by looking only at automobile interactions where a death occurred, and then comparing that to other automobile interactions where a death occurred? It’s utterly silly: most automobile interactions do not result in an accident; most accidents do not result in death. It’s literally junk science. Any result from such a study would be complete nonsense if used in the manner the Kellerman study is used. This study says nothing about the relative dangers of having a firearm in a home.

Unfortunately, this is the way Kellerman tends to design all of his studies. It’s likely why the CDC decided to start funding him.

Kellerman, 1993, Gun ownership as a risk factor for homicide in the home.

In 1993, Kellerman's number dropped from 43 to 1 down to 2.7 to 1 (often cited as 3 to 1). This is often reported in the media as meaning a family member is 3 times more likely to be killed by their own firearm than to use it in self-defense. But what it really said was that a person who kept a firearm was 3 times more likely to become a homicide victim by any means. Most of those means were not the firearms owned by the victim.

An incomplete set of Kellerman's data was made available temporarily, and what it showed is probably why Kellerman feared making his data public. It showed an upper limit of 34% as the share of victims who were likely killed by someone using the victim's firearm. Kellerman didn't give us that data; the 34% number assumes that all murderers Kellerman classified as "intimates" lived in the same house and used the same firearm, something that is probably not true. But even with that high-end number, the 2.7 to 1 disappears, and his data shows that it is safer to own a firearm than not.

Kellerman's study has been compared to studying lifejacket ownership and the relative chances of drowning. The same methodology would show that people who own lifejackets are more likely to drown than people who don't. But you are not going to increase your risk of drowning in the desert by purchasing a lifejacket, and you aren't going to decrease your risk of drowning in the water by throwing out your lifejacket.

There are, in other words, at least two explanations for Kellerman’s results, even assuming that Kellerman’s study was reasonably designed:

  1. If you own a firearm, this magically causes people to knife you even when your firearm is in the closet upstairs, or,
  2. If your life is already in danger, you are more likely to keep a firearm in the home.

But this was not a well-designed study, or at least, it wasn’t designed to find the truth. Kellerman didn’t look at households randomly, he found households where homicides had occurred and then tried to find households that matched in all ways except that they didn’t own a firearm. But the very fact of a homicide occurring in a home introduces a serious self-selection bias, and his method of “matching” seemed designed to let that self-selection through. For example, he tried to match level of criminality in each household simply by asking about “arrests”. But this was merely a yes/no question, not a measure of severity. We don’t know, because Kellerman didn’t release the data, whether the control households contained a number of pot-smoking arrests compared to a similar number of rape arrests in the homicide households. But we already know by the design of the study that the level of crime occurring in the homicide households at least included homicide. It is unlikely that the level of crime in the other households exceeded that level of severity and very likely that it was less than that level.

Similarly, in the households where homicides had occurred, he's asking questions about someone else. In the households where homicides did not occur, he's asking questions about the person he's talking to. If only 30 of the 388 control households lied and said that they didn't have a firearm, his numbers disappear completely, even ignoring the question of whether the homicide involved the victim's own firearm. This is because the "real" number was not 2.7 to 1; it was 44% vs. 36%, or about 1.2 to 1. The 2.7 to 1 ratio came about only after massaging the data to try to remove confounding risk factors. This is a very dangerous way to extract conclusions from a population.

It is very likely that people in the control group refused to report owning firearms. According to Rafferty, Ann P., et al., in "Validity of a household gun question in a telephone survey", 10.3% of hunters and 12.7% of people who had registered a handgun denied household ownership of a firearm in interviews. If there is a noncompliance rate of more than 7.7% among the control group, Kellerman's numbers disappear. In the homicide group this problem is less pronounced, if it exists at all, because those households underwent a police investigation, and the police reports noted the presence of any firearms.
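A short sensitivity check makes the point concrete. All the figures below come from the discussion above (388 control households, 36% admitted ownership vs. 44% in the homicide households, 30 hypothetical non-compliers), not from Kellerman's unreleased raw data:

```python
# Sensitivity of Kellerman's 1993 ratio to underreporting among controls.
# All figures are taken from the discussion above, not from Kellerman's
# (unreleased) raw data; the 30 "liars" are hypothetical.
controls = 388
reported_control_rate = 0.36   # controls admitting a firearm in the home
case_rate = 0.44               # homicide households with a firearm

liars = 30                     # hypothetical controls denying ownership
adjusted_owners = reported_control_rate * controls + liars
adjusted_rate = adjusted_owners / controls

print(f"noncompliance required: {liars / controls:.1%}")  # 7.7%
print(f"adjusted control rate: {adjusted_rate:.1%}")      # 43.7%, almost the case rate
```

With noncompliance rates of roughly the size Rafferty et al. measured, the gap between the homicide households and the control households vanishes entirely.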

This is, simply, junk science, and exactly the kind of “research” that the CDC loved to fund for purposes of providing media quotes, before it had its hands slapped down by Congress.

Kellerman, 1998. Injuries and Deaths Due to Firearms in the Home.

In this 1998 study, Kellerman tried to mask the problems of the 1993 study by excluding non-firearms homicides and injuries. But he still included assaults in which the firearm was not the firearm owned by the victim. This study covered 438 firearm assaults that resulted in at least a wounding. Of those, 49 involved a firearm kept in the home where the shooting occurred; 295 involved a firearm brought to the home from elsewhere; and in the remaining 94, the origin of the firearm was not available.

Well over half of those injuries were not the result of the homeowner’s firearm. Again, Kellerman, to the extent he has a theory at all, is claiming that if you are on a boat, get rid of that lifejacket!
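The "well over half" figure follows directly from the counts quoted above from the 1998 study:

```python
# Breakdown of the 438 firearm assaults in Kellerman's 1998 study,
# using the counts quoted above.
home_gun = 49      # firearm kept in the home where the shooting occurred
outside_gun = 295  # firearm brought to the home from elsewhere
unknown = 94       # origin of the firearm not available

total = home_gun + outside_gun + unknown
assert total == 438

print(f"{outside_gun / total:.0%} confirmed not the household's firearm")  # 67%
```

Even before assigning any of the 94 unknowns, two thirds of the shootings involved a firearm brought in from elsewhere.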

Good research matters

By all outward appearances, and appearances are all we have because they were secretive with their data, what the CDC researchers were doing was crunching the numbers in their data in different ways until they found a result that pleased them. This is the polar opposite of science. This is why taxpayers don't trust the Centers for Disease Control to do firearms research: they can't be trusted to do science in that field.

As Eisenhower noted in his farewell address, government funding by its nature swamps independent research. Even today the general discussion ignores real results that could be used to make real reductions in violent crime in favor of decades-old bad science. Even today, researchers and/or science writers are afraid to note the obvious: that when your theory fails its predictions, it is time for a better theory.

Government funding always ends up being politically-tainted to some degree, and the CDC was worse than most. When someone tells you that we should end the non-existent ban against the CDC performing gun research, what they’re really saying is that they want the CDC to be free again to engage in political campaigning in favor of gun control. And continue letting people die from junk science.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite. — President Dwight D. Eisenhower, Farewell Address

In response to The plexiglass highway: Government bureaucracies can cause anything to fail, even progress.

  1. The general form of this is the rarely-enforced Hatch Act of 1939.

  2. Most likely, the CDC, which is part of the HHS.
