As explained in the ANPR, the Commission is considering whether it should implement new regulatory measures concerning the ways in which companies collect, aggregate, protect, use, analyze, and retain consumer data, as well as transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.
The Commission is particularly concerned that “companies’ growing reliance on automated systems is creating new forms and mechanisms for discrimination based on statutorily protected categories, including in critical areas such as housing, employment, and healthcare.” To address these concerns, the Commission appears to be focused on regulatory measures that would minimize algorithmic error. The Commission also appears to be under the impression that companies have the ability and responsibility to minimize these errors through their own individual actions.
Our comments explain why algorithmic fairness is not simply a function of minimizing error rates and why the FTC should consider the effect of any particular decision system (whether algorithmic or human) on inequality as a whole rather than focusing exclusively on automated systems.
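The claim that fairness is not simply a function of minimizing error rates can be made concrete with a small hypothetical: two decision rules with identical overall error rates can nonetheless concentrate their errors on different groups. The data and rules in the sketch below are invented solely for illustration; they do not describe any actual system.

```python
# Hypothetical illustration: two decision rules with the SAME overall
# error rate can distribute errors very differently across groups.
# Each record is (group, true_outcome, rule_a_decision, rule_b_decision).
records = [
    # Group A: rule A matches the true outcome; rule B denies the qualified.
    ("A", 1, 1, 0),
    ("A", 1, 1, 0),
    ("A", 0, 0, 0),
    ("A", 0, 0, 0),
    # Group B: rule A denies the qualified; rule B matches the true outcome.
    ("B", 1, 0, 1),
    ("B", 1, 0, 1),
    ("B", 0, 0, 0),
    ("B", 0, 0, 0),
]

def error_rate(records, rule_index, group=None):
    """Share of decisions that disagree with the true outcome,
    optionally restricted to one group."""
    subset = [r for r in records if group is None or r[0] == group]
    wrong = sum(1 for r in subset if r[rule_index] != r[1])
    return wrong / len(subset)

# Both rules misclassify 2 of 8 applicants overall (25% error)...
assert error_rate(records, 2) == error_rate(records, 3) == 0.25
# ...but rule A's errors all fall on group B, and rule B's all on group A.
print(error_rate(records, 2, "A"), error_rate(records, 2, "B"))  # 0.0 0.5
print(error_rate(records, 3, "A"), error_rate(records, 3, "B"))  # 0.5 0.0
```

On an aggregate accuracy metric the two rules are indistinguishable, yet their effects on each group, and therefore on inequality as a whole, are opposite. This is why the comments urge attention to a decision system's distributional effects rather than to error minimization alone.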
- How prevalent is algorithmic error? To what extent is algorithmic error inevitable? If it is inevitable, what are the benefits and costs of allowing companies to employ automated decision-making systems in critical areas, such as housing, credit, and employment? To what extent can companies mitigate algorithmic error in the absence of new trade regulation rules?
- To what extent, if at all, should new rules require companies to take specific steps to prevent algorithmic errors? If so, which steps? To what extent, if at all, should the Commission require firms to evaluate and certify that their reliance on automated decision-making meets clear standards concerning accuracy, validity, reliability, or error? If so, how? Who should set those standards: the FTC or a third-party entity? Or should new rules require businesses to evaluate and certify that the accuracy, validity, or reliability of their commercial surveillance practices are in accordance with their own published business policies?
- To what extent, if at all, do consumers benefit from automated decision-making systems? Who is most likely to benefit? Who is most likely to be harmed or disadvantaged? To what extent do such practices violate Section 5 of the FTC Act?
- If new rules restrict certain automated decision-making practices, which alternatives, if any, would take their place? Would these alternative techniques be less prone to error than the automated decision-making they replace?