There is no question that algorithms can make biased or unfair decisions. But Financial Times columnist Stephen Bush's assertion that a handful of high-profile cases portend the widespread erosion of democracy and civil liberties fans the flames of AI fears, detracting from productive analysis of how likely widespread biased AI actually is and diverting attention from more insidious causes of these problems.

For one, the article incorrectly states that AI use in public policy and business decision-making is commonplace, making it easy to paint a picture of AI misuse quietly mushrooming out of control. In reality, adoption of AI in the U.K. public sector is limited, with most examples still under development or at the proof-of-concept stage, and business adoption stands at only 15 percent.

More importantly, the article uses AI as a convenient target for moral indignation about systemic problems in the criminal justice system, but in a way that does not advance momentum toward change. For instance, the article lambasts AI-enabled risk assessment tools that help decide whether an accused person should be allowed bail by predicting the likelihood that they will miss a future court appointment. But the underlying social problem, which is that many people cannot afford bail and so must remain in jail for weeks or months while awaiting trial, is not one that AI created, nor is it one that transparency into algorithms and data can solve.

Ultimately, AI myopia in alarmist critiques distracts from the more pernicious problems the government should address.