Bias in Human Decision-Making

The ideas in this article, in the context of our expectations of Amazon's same-day delivery service, appeared in The Conversation, Slate, New Republic, and US News.

There has been considerable recent press on algorithmic bias.  There is even a workshop series devoted to this topic.  Given the importance of algorithms to decision-making in today’s world of Big Data, this attention to accountability and fairness is well-deserved.  However, in my opinion, the tone of the discussion is far too alarmist.

Humans are known to carry biases, and these biases impact human decision-making even when we try to be unbiased.  For example, in well-known studies, a simple tool was much more frequently misidentified as a gun in the hands of a black man than in the hands of a white one.  Such latent biases undoubtedly impact everyday decisions, whether it is a policeman deciding whether someone is an armed threat or an innocent citizen, or an interviewer deciding how “intelligent” a job applicant is.  On account of our history, some types of bias may be more salient, such as those based on race or sex.  Nevertheless, many other types of bias may be equally prevalent: for instance, bias based on hair style, dress, or voice pitch.

Since we are human, and are comfortable with accepting human limitations, we have learned to live with and manage these biases.  But these biases are complex and not easily managed: even when we try our best not to be biased, our brains often fool us.  For a few crucial biases that are socially unacceptable, we have a societal definition of discrimination and make explicit attempts to overcome it.

With algorithms making decisions, there are those who claim that the “data speak for themselves”.  This is far too naïve a view: an algorithm’s output depends very much on the data selected as input, the model chosen, the many representation choices made along the way, and so on.  On the other hand, many seem to worry about algorithmic bias simply because the algorithm makes decisions in a way they do not understand: this is akin to the frustration you would feel if decisions of importance to you were made by an inscrutable human whose motives and values you could not interpret.

With machines making decisions, biases become much easier to quantify.  We know not only the outcome rates along each dimension, say the fraction of each group receiving a favorable decision, but also exactly how the decisions were made.  If we can quantify what fairness means, we can verify that the machine meets this definition of fairness.
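
As a concrete illustration, here is a minimal sketch, in Python with a made-up decision log, of how one common quantitative fairness definition (demographic parity, i.e., equal rates of favorable decisions across groups) could be checked against a machine's recorded decisions.  The data layout and function names are hypothetical.

```python
from collections import defaultdict

def acceptance_rates(decisions):
    """Fraction of favorable decisions for each group, computed
    from a logged history of (group, favorable?) pairs."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in favorable-decision rate between any
    two groups; 0 means every group is treated at the same rate."""
    rates = acceptance_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical decision log: (group, was the decision favorable?)
log = [("A", True), ("A", False), ("A", True),
       ("B", False), ("B", False), ("B", True)]
print(acceptance_rates(log))        # {'A': 0.67, 'B': 0.33} (approx.)
print(demographic_parity_gap(log))  # 0.33 (approx.)
```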

For traditional, human decision-making, there are two main avenues used to ensure fairness.  One is to prohibit the consideration of certain factors, such as age or race.  The other is to measure the outcome, and presume discrimination if the outcome is not proportionate.  For example, if women make up 50% of the application pool but only 20% of those hired, we may ask a company to explain why this is not evidence of gender discrimination.  Both of these avenues are straightforward to implement computationally, as the sketch below illustrates.  So it should be straightforward to build computer algorithms that satisfy the definitions of fairness we apply to humans.
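
Both avenues fit in a few lines of Python.  In the sketch below, strip_protected drops prohibited factors before any model sees them, and proportionality runs the outcome test from the hiring example above (each group's share of hires divided by its share of the applicant pool).  All names and data are illustrative.

```python
def strip_protected(record, protected=("age", "race", "sex")):
    """Avenue 1: prohibit certain factors by removing them from
    the input before any model ever sees them."""
    return {k: v for k, v in record.items() if k not in protected}

def proportionality(applicants, hires, attribute):
    """Avenue 2: each group's share of hires divided by its share
    of the applicant pool; a ratio well below 1 flags a
    disproportionate outcome."""
    def share(pool, value):
        return sum(1 for p in pool if p[attribute] == value) / len(pool)
    groups = {p[attribute] for p in applicants}
    return {g: share(hires, g) / share(applicants, g) for g in groups}

# Hypothetical data matching the example above: women are 50% of
# the applicant pool but only 20% of those hired.
applicants = [{"sex": "F"}] * 50 + [{"sex": "M"}] * 50
hires = [{"sex": "F"}] * 2 + [{"sex": "M"}] * 8
print(proportionality(applicants, hires, "sex"))
# {'F': 0.4, 'M': 1.6}: women are hired at 40% of their pool share
```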

In short, humans are not perfect.  Human decisions are biased, in many ways.  Machines will not be perfect either.  But bias in machines can be measured and controlled, giving us machine decisions that can be less biased than human ones.  The goal of algorithmic accountability should be not just to avoid egregious algorithmic bias, but to provide us with decisions far less biased than human decisions.