Author: Gershon Ben Keren
There is nothing like a statistic to make a point, even if that point is obvious without one. I'm a big fan of statistics and tend to enjoy quantitative research far more than qualitative; that doesn't mean I don't see the value of research that employs non-quantitative methods, but at the macro/"big picture" level I like to have a number, one more significant than chance, to explain a set of actions and behaviors. However, when such a number/statistic exists, it is not enough to merely accept it, even when it appears fairly conclusive, e.g., that 64% of street robberies in Boston don't involve the presence of a weapon, i.e., they are strong-arm affairs. At first glance this may suggest that physical resistance is a viable strategy; however, what the statistic doesn't tell us is whether the person committing the mugging is concealing a weapon, and whether resistance would escalate the incident to the point where they draw and use it. The statistic also doesn't show whether more than one assailant was involved, and on its own it includes no information on victimology, e.g., the age, gender, etc. of those targeted. Basing a strategy for responding to a street robbery in Boston on this statistic alone would therefore be a dangerous way to go. In this article I want to look at some of the issues different statistics may have, and how we should factor these in when trying to understand and make sense of them.
One of the first things to understand when looking at a statistic is the source of the data from which it's drawn, e.g., the 64% figure for street robberies is drawn from Boston police incident reports over the last 10 years. Even though the data source is official and "credible", there are still a lot of potential issues with it, and it may not in fact give us the best understanding of what street robberies in Boston look like. Firstly, not all street robberies will be reported, e.g., if you lose $20 in a robbery, will you take the time to report it to the police, especially when there is little chance of the perpetrator getting caught and of you getting your money back? We know from victim surveys that many of these types of "petty" crimes aren't reported, and that it is property crimes such as burglaries and auto thefts, where a police incident report is required for insurance reasons, where victims are more likely to involve law enforcement. It could also be that this figure is lower than it should be if those targeted are more likely to make a report when a weapon is involved, e.g., an individual judges the incident to be more serious, believing that their life may have been at risk and that they had little choice but to acquiesce. This may also skew the figures when the gender of the victim is considered, with men feeling more self-conscious than women about reporting an unarmed mugging, where they may feel they would have been expected to resist. Another potential issue is that of reporting and recording offenses. Law enforcement officers record and categorize offenses primarily for the purposes of prosecuting an offender, not for analyzing crime. This is probably not so significant for a crime such as robbery, where the component parts are quite straightforward (property must be taken, and force or the threat of force used to do so), however for other crimes, defining and classifying them may be somewhat more complex.
Some statistics may come from data that has been aggregated to a level where it is practically meaningless other than for recording general trends. The FBI's Uniform Crime Reporting (UCR) program takes data from around 18,000 different agencies and produces reports which show crime trends over time. It would be easy to take a cursory glance at such reports and deduce that the U.S. is becoming a safer place to live and that your chances of being victimized are low; however, crime occurs locally, and so such statistics aren't necessarily directly relevant to you, e.g., if somebody lives/works in a high-crime locale where violent crime is on the rise, it matters little that it is falling nationally. There is also the danger, when aggregating data, of committing an ecological fallacy, where you draw conclusions about individuals based on group data, e.g., you believe that the majority of muggers don't use weapons because the group data suggests this, when in fact the majority of the street robberies you are looking at were committed by a small handful of muggers who acted in this way, whilst most muggers were in fact armed. When we aggregate data, we need to be careful about assuming that what holds true for a group also holds true for the individuals in that group. This is one of the major dangers of looking at top-level/macro data and assuming that it holds true for all, e.g., violent crime is falling in the U.S., therefore it must be falling in my locale. There is an added danger when aggregating data that comes from different sources/agencies: the way crimes are reported and classified may not be uniform, resulting in different crimes being lumped together and skewing the results.
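The ecological fallacy described above can be made concrete with a small worked example. The data below is entirely invented: a couple of prolific unarmed muggers and a larger number of armed ones, each committing a single robbery. Counted per incident, most robberies are unarmed; counted per offender, most muggers are armed, which is exactly the trap of reading group-level figures as statements about individuals.

```python
# Invented illustration of the ecological fallacy -- not real data.
# One row per robbery incident: (offender_id, offender_was_armed)
incidents = (
    [("A", False)] * 35                      # offender A: 35 unarmed robberies
    + [("B", False)] * 35                    # offender B: 35 unarmed robberies
    + [(f"C{i}", True) for i in range(30)]   # 30 armed offenders, 1 robbery each
)

# Incident-level view: what the aggregate statistic reports
unarmed_share_incidents = sum(1 for _, armed in incidents if not armed) / len(incidents)

# Offender-level view: what is true of the individuals
offenders = {oid: armed for oid, armed in incidents}
armed_share_offenders = sum(offenders.values()) / len(offenders)

print(f"{unarmed_share_incidents:.0%} of incidents are unarmed")  # 70%
print(f"{armed_share_offenders:.0%} of offenders are armed")      # 94%
```

Here 70% of the incidents are unarmed robberies, yet 30 of the 32 offenders carry weapons; a strategy built on the incident-level figure would misjudge the typical mugger.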
When we consider all the potential issues with crime statistics, it might seem that they are too unreliable to be useful, i.e., that we can't draw any concrete conclusions from them. However, I would argue that this isn't the case, and that we can have confidence in what they are telling us as long as we understand the source of the underlying data and the methodology that was applied. Statistics fall down when we use them in a lazy, haphazard manner, or latch on to results which confirm our own biases without looking at research that may contradict or color them. By using multiple sources and examining the methodologies used to produce a result, we can understand the limits of a statistic and then set about exploring how to answer the questions it raises, rather than simply treating it as a conclusion.