The Promise and Perils of Predictive Policing Based on Big Data

Originally published in The Conversation.  Also appeared in Gizmodo and Slate.

Police Line Up. Photo: tariqabjotu, CC-BY-2.0.

Police departments, like everyone else, would like to be more effective while spending less. Given the tremendous attention to big data in recent years, and the value it has provided in fields ranging from astronomy to medicine, it should be no surprise that police departments are using data analysis to inform deployment of scarce resources. Enter the era of what is called “predictive policing.”

Some form of predictive policing is likely now in force in a city near you. Memphis was an early adopter, and cities from Minneapolis to Miami have since embraced predictive policing. Time magazine named predictive policing (with particular reference to the city of Santa Cruz) one of the 50 best inventions of 2011. New York City Police Commissioner William Bratton recently said that predictive policing is “the wave of the future.”

The term “predictive policing” suggests that the police can anticipate a crime and be there to stop it before it happens and/or apprehend the culprits right away. As the Los Angeles Times points out, it depends on “sophisticated computer analysis of information about previous crimes, to predict where and when crimes will occur.”

At a very basic level, it’s easy for anyone to read a crime map and identify neighborhoods with higher crime rates. It’s also easy to recognize that burglars tend to target businesses at night, when they are unoccupied, and to target homes during the day, when residents are away at work. The challenge is to take a combination of dozens of such factors to determine where crimes are more likely to happen and who is more likely to commit them. Predictive policing algorithms are getting increasingly good at such analysis. Indeed, such was the premise of the movie Minority Report, in which the police can arrest and convict murderers before they commit their crime.

Tom Cruise in Minority Report. Photo: eyeliam, CC-BY-2.0.

Predicting a crime with certainty is something that science fiction can have a field day with. But as a data scientist, I can assure you that in reality we can come nowhere close to certainty, even with advanced technology. To begin with, predictions can only be as good as the input data, and quite often these input data have errors.

But even with perfect, error-free input data and unbiased processing, what the algorithms ultimately determine are correlations. Even if we have perfect knowledge of your troubled childhood, your socializing with gang members, your lack of steady employment, your wacko posts on social media and your recent gun purchases, all that the best algorithm can do is to say it is likely, but not certain, that you will commit a violent crime. After all, to treat such predictions as guarantees is to deny free will.

Feed in data, get out probabilities

What data can do is give us probabilities, rather than certainty. Good data coupled with good analysis can give us very good estimates of probability. If you sum probabilities over many instances, you can usually get a robust estimate of the total.

For example, data analysis can provide a probability that a particular house will be broken into on a particular day based on historical records for similar houses in that neighborhood on similar days. An insurance company may add this up over all days in a year to decide how much to charge for insuring that house.

A police department may add up these probabilities across all houses in a neighborhood to estimate how likely it is that there will be a burglary in that neighborhood. They can then place more officers in neighborhoods with higher probabilities for crime with the idea that police presence may deter crime. This seems like a win all around: less crime and targeted use of police resources. Indeed the statistics, in terms of reduced crime rates, support our intuitive expectations.
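
To make the arithmetic concrete, here is a minimal sketch in Python. All of the numbers are invented for illustration; a real system would estimate the per-house probabilities from historical crime records, and the loss amount would come from actuarial data.

```python
# Illustrative sketch: the probabilities and loss amount below are made up.

# Estimated probability that each house is broken into on a given day.
daily_break_in_prob = {
    "12 Oak St": 0.0004,
    "15 Oak St": 0.0007,
    "18 Elm St": 0.0002,
    "21 Elm St": 0.0009,
}

# An insurer sums one house's probability over the year to price a policy.
house = "15 Oak St"
expected_break_ins_per_year = daily_break_in_prob[house] * 365
premium = expected_break_ins_per_year * 8000  # assumed average loss per burglary
print(f"{house}: expected break-ins per year = {expected_break_ins_per_year:.2f}, "
      f"fair premium = ${premium:.0f}")

# A police department sums across all houses to compare neighborhoods.
expected_burglaries_today = sum(daily_break_in_prob.values())
print(f"Expected burglaries in this neighborhood today: {expected_burglaries_today:.4f}")
```

The individual probabilities are tiny and uncertain, but their sums, over a year or over a neighborhood, are stable enough to act on, which is exactly why insurers and police departments find them useful.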

Predictive policing goes beyond just mapping crime locations. Photo: Brett Lider, CC-BY-SA-2.0.

Likely doesn’t mean definitely

Similar arguments can be used in multiple arenas where we’re faced with limited resources. Realistically, customs agents cannot thoroughly search every passenger and every bag. Tax authorities cannot audit every tax return. So they target the “most likely” culprits. But likelihood is very far from certainty: all the authorities know is that the odds are higher. Undoubtedly many innocent individuals are labeled “likely.” If you’re innocent but get targeted, it can be a big hassle, or worse.

Incorrectly targeted individuals may be inconvenienced by a customs search, but predictive policing can do real harm. Consider the case of Tyrone Brown, recently reported in The New York Times. He was specifically targeted for attention by the Kansas City police because he was friends with known gang members. In other words, the algorithm picked him out as having a higher likelihood of committing a crime based on the company he kept. They told him he was being watched and would be dealt with severely if he slipped up.

The algorithm didn’t “make a mistake” in picking out someone like Tyrone Brown. It may have correctly determined that Tyrone was more likely to commit a murder than you or I. But that is very different from saying that he did (or will) kill someone.

Suppose there’s a one-in-a-million chance that a typical citizen will commit a murder, but there is a one-in-a-thousand chance that Tyrone will. That makes him a thousand times as likely to commit a murder as a typical citizen. So it makes sense statistically for the police to focus their attention on him. But don’t forget that there is only a one-in-a-thousand chance that he commits a murder. For a thousand such “suspect” Tyrones, there is only one who is a murderer and 999 who are innocent. How much are we willing to inconvenience or harm the 999 to stop the one?
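
The arithmetic behind this example is worth spelling out. A minimal sketch, using the hypothetical one-in-a-million and one-in-a-thousand rates from above:

```python
# Hypothetical rates from the example above, not real crime statistics.
typical_rate = 1 / 1_000_000   # chance a typical citizen commits a murder
flagged_rate = 1 / 1_000       # chance a flagged ("suspect") individual does

relative_risk = flagged_rate / typical_rate
print(f"A flagged individual is {relative_risk:.0f} times as likely as a typical citizen.")

# Yet among 1,000 flagged individuals we still expect only one actual offender.
flagged = 1_000
expected_offenders = flagged * flagged_rate
print(f"Out of {flagged} flagged people: about {expected_offenders:.0f} offender "
      f"and {flagged - round(expected_offenders)} who are innocent.")
```

The relative risk (a factor of one thousand) sounds alarming; the absolute risk (one in a thousand) is what determines how many innocent people bear the cost of the intervention.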

Kansas City is far from alone in this sort of preemptive contact with citizens identified as “likely to commit crimes.” Last year, there was considerable controversy over a similar program in Chicago.

Balancing crime reduction with civil rights

Such tactics, even if effective in reducing crime, raise civil liberty concerns. Suppose you fit the profile of a bad driver and have accumulated points on your driving record. Consider how you would feel if you had a patrol car follow you every time you got behind the wheel. Even worse, it’s likely, even if you’re doing your best, that you will make an occasional mistake. For most of us, rolling through a stop sign or driving five miles above the speed limit is usually of little consequence. But since you have a cop following you, you get a ticket for every small offense. In consequence, you end up with an even worse driving record.

Yes, data can help make predictions, and these predictions can help police deploy their resources more effectively. But we must remember that a probabilistic prediction is not a certainty, and we must explicitly consider the harm to innocent people when we take actions based on probabilities. More broadly, data science can bring us many benefits, but care is required to make sure that it does so in a fair manner.

Bias in Human Decision-Making

The ideas in this article, in the context of our expectations of Amazon's same-day delivery service, appeared in The Conversation, Slate, New Republic, and US News.

There has been considerable recent press on algorithmic bias.  There is even a workshop series devoted to this topic.  Given the importance of algorithms to decision-making in today’s world of Big Data, this attention to accountability and fairness is well-deserved.  However, the tone of the discussion is far too alarmist in my opinion.

Humans are known to carry biases, and these biases impact human decision-making even when we try to be unbiased.  For example, a simple tool was much more frequently misidentified as a gun in the hands of a black man than in the hands of a white one.  Such latent biases undoubtedly affect everyday decisions, whether it is a policeman deciding whether someone is an armed threat or an innocent citizen, or an interviewer deciding how “intelligent” a job applicant is. On account of our history, some types of bias may be more salient, such as those based on race or sex.  Nevertheless, many other types of bias may be equally prevalent: for instance, biases based on hair style, dress, or voice pitch.

Since we are human, and are comfortable with accepting human limitations, we have learned how to live with and manage these biases.  But these biases are complex, and not easily managed.  Even when we try our best not to be biased, our brains often fool us.  For a few crucial biases that are socially unacceptable, we have a societal definition of discrimination, and explicit attempts to overcome it. 

With algorithms making decisions, there are those who claim that the “data speak for themselves”.   This is far too naïve a view: algorithm output depends very much on the data selected for input, the model chosen, many representation choices along the way, and so on.  On the other hand, it appears there are many who worry about algorithmic bias because the algorithm is making decisions in a way that they do not understand: this is akin to the frustration you would feel if decisions of importance to you were made by an inscrutable human whose motives and values you were unable to interpret.

With machines making decisions, biases become much easier to quantify.  We not only know the outcome rates along each dimension, but also exactly how the decisions were made.  If we are able to quantify what fairness means, we can ensure that the machine meets this definition of fairness.

For traditional, human decision-making, there are two main avenues used to ensure fairness.  One is to prohibit the consideration of certain factors, such as age or race.  Another is to measure the outcome, and to presume discrimination if the outcome is not proportionate.  For example, if women make up 50% of the application pool but only 20% of those hired, we may ask the company to explain why this isn’t evidence of gender discrimination.  Both of these avenues are straightforward to implement computationally, so it should be straightforward to build computer algorithms that satisfy the definitions of fairness we apply to humans.
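
Both avenues translate directly into code. Here is a minimal sketch with made-up application and hiring counts; the 80% cutoff used to flag a disproportionate outcome is one common rule of thumb (the “four-fifths rule”), chosen here as an assumption rather than taken from the text above.

```python
# Made-up counts, matching the 50%-of-applicants / 20%-of-hires example above.
applicants = {"women": 500, "men": 500}
hired      = {"women": 20,  "men": 80}

# Avenue 1: prohibit certain factors by never passing them to the model.
ALLOWED_FEATURES = ["years_experience", "test_score"]  # gender deliberately excluded

# Avenue 2: measure the outcome and flag disproportionate selection rates.
rates = {group: hired[group] / applicants[group] for group in applicants}
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}")
print(f"Ratio of lowest to highest selection rate: {ratio:.2f}")
if ratio < 0.8:  # assumed threshold; choosing it is a policy question, not a technical one
    print("Outcome is disproportionate enough to ask for an explanation.")
```

Note that the second check says nothing about why the rates differ; like the legal standard it mimics, it only establishes that an explanation is owed.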

In short, humans are not perfect.  Human decisions are biased, in many ways.  Machines will not be perfect either.  But bias in machines can be measured and controlled, giving us machine decisions that can be less biased than human ones.  The goal of algorithmic accountability should be not just to avoid egregious algorithmic bias, but to provide us with decisions that are much less biased than human decisions.

A Dog by Any Other Name: The Use of Surrogate Variables

No one wants vote fraud and no one wants voter disenfranchisement.  Voter ID requirements are meant to find a path between these two competing problems.  If you follow the political debates on this topic, you will see every Republican more concerned about vote fraud and every Democrat more concerned about voter disenfranchisement.  The reason for this divide is rather obvious: the segment of the population most impacted by voter ID laws overwhelmingly votes Democratic, so it is in Republicans’ interest to allow as few votes as possible from this segment and in the Democrats’ interest to allow as many as possible.  This is a scenario with an obvious surrogate variable: the ability to produce an acceptable ID stands in for likely party affiliation.

Many universities have a strong belief in the value of a diverse student body and strive hard to achieve this diversity.  Race is one axis along which most universities desire diversity.  Laws, however, limit direct consideration of race, while there usually are no limits on indirect consideration of race through other variables that may be highly correlated with it.  Indeed, many universities have found legal ways to achieve diversity.

In short, there often are other variables that can, if desired, be considered in lieu of variables that are disallowed.  Doing this manually is not easy, but computers can do it really well.  Computers can also measure how well they are doing, and we can require them to make sure they don’t choose combinations of variables that are “too good” as surrogates.
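
One way for a computer to measure how well it is doing is to check how accurately the disallowed variable can be reconstructed from the variables it is actually allowed to use. A minimal sketch, on an invented toy table; the correlation threshold is an assumption, and in practice the check would use real records and a richer measure than a single correlation:

```python
import numpy as np

# Invented toy data: 1 = member of the protected group, 0 = otherwise.
protected = np.array([1, 1, 1, 0, 0, 0, 1, 0])

# Candidate input variables the model is allowed to use.
candidates = {
    "zip_code_income_rank": np.array([2, 1, 2, 8, 9, 7, 3, 8]),
    "years_experience":     np.array([4, 7, 2, 5, 6, 3, 8, 4]),
}

# Flag any allowed variable that is "too good" a surrogate for the disallowed one.
THRESHOLD = 0.8  # assumed cutoff; where to set it is a policy decision
for name, values in candidates.items():
    r = abs(np.corrcoef(values, protected)[0, 1])
    verdict = "possible surrogate" if r > THRESHOLD else "ok"
    print(f"{name}: |correlation with protected attribute| = {r:.2f} ({verdict})")
```

The same idea extends to combinations of variables: train a model to predict the disallowed attribute from the allowed ones, and if it predicts too well, the combination is acting as a surrogate.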