
Reasons for Systemic Bias in Machine Learning for Policing and Other Applications

Analyzing why bias occurs and how we can practically address it

Alan Liu
Jun 29, 2020

Recently, Microsoft, Amazon, and IBM all agreed not to sell their facial recognition technology to law enforcement for the next year, amid findings of racial bias across many such systems. Much of the press coverage has pointed to this bias to argue that the tools should be banned from police use entirely. Personally, I disagree with this binary approach. The reality is that developing and using this technology requires certain philosophical tradeoffs. As someone who works with machine learning, I’d like to shed some light on how and why this bias is introduced into such algorithms, and to suggest practical approaches to adopting the technology fairly and justly.

Before discussing the algorithms themselves, I’ll lay out the philosophical constraints that any machine learning application must satisfy: people of different protected classes should be treated equally, and we should try to maximize benefits for society as a whole. Unfortunately, these two constraints are fundamentally at odds in any society where protected classes are not equally represented (as they usually are not). Under the “no free lunch” principle, having a more fair…
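
A rough numerical illustration of why these two goals come apart: the minimal Python sketch below uses entirely made-up numbers (not data from any real policing system) for two hypothetical groups that are scored by the same model but have different base rates of the predicted outcome.

```python
# A minimal sketch (not from the article) of why the two constraints pull apart.
# Two hypothetical groups share the same score model but have different base
# rates of the predicted outcome; all numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, base_rate):
    """Simulate true labels and noisy risk scores for one group."""
    y = (rng.random(n) < base_rate).astype(int)
    scores = rng.normal(loc=1.5 * y, scale=1.0)  # positives score higher on average
    return y, scores

# Assumed base rates for the toy example: group A's outcome is 3x more common.
y_a, s_a = make_group(10_000, base_rate=0.30)
y_b, s_b = make_group(10_000, base_rate=0.10)

# Pick one shared threshold that maximizes overall (pooled) accuracy.
y_all, s_all = np.concatenate([y_a, y_b]), np.concatenate([s_a, s_b])
thresholds = np.linspace(-2.0, 3.0, 501)
t_best = thresholds[np.argmax([((s_all >= t) == y_all).mean() for t in thresholds])]

for name, y, s in [("A", y_a, s_a), ("B", y_b, s_b)]:
    flagged = s >= t_best
    print(
        f"group {name}: flagged {flagged.mean():.1%} of people, "
        f"precision among flagged {y[flagged].mean():.1%}"
    )
# Even though the score distributions are identical given the true label, the
# groups end up flagged at different rates and with different precision simply
# because their base rates differ. Forcing those numbers to be equal requires
# either group-specific thresholds (treating individuals from different groups
# differently) or a shared threshold that no longer maximizes overall accuracy.
```

The same tension shows up in the well-known fairness impossibility results: when base rates differ between groups, no classifier can equalize every error metric across them without giving something up elsewhere.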

