Google has published a new tool, the What-If Tool, that lets users visualize possible bias in machine learning models without writing any code. With the What-If Tool, users can manually edit their data and a model’s parameters to see how those changes affect the model’s predictions. This helps users identify when a model’s data or parameters introduce bias, which can have undesirable effects such as unfairly discriminating against certain populations.
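The kind of counterfactual probing described above can be sketched in plain Python: edit a single feature of a data point and compare the model's predictions before and after. The toy logistic model, its weights, and the feature names here are all hypothetical stand-ins, not part of the What-If Tool itself.

```python
import math

def model(features):
    # Toy stand-in for a trained classifier: a fixed logistic model.
    # Weights and the -2.0 bias are made up for illustration.
    weights = {"age": 0.04, "income": 0.00002, "gender": 0.8}
    z = sum(weights[k] * v for k, v in features.items()) - 2.0
    return 1.0 / (1.0 + math.exp(-z))  # probability of a positive outcome

original = {"age": 35, "income": 50000, "gender": 0}
edited = dict(original, gender=1)  # flip one sensitive feature, hold the rest fixed

p_before = model(original)
p_after = model(edited)

# A large gap between two otherwise identical individuals is a bias signal.
print(f"before={p_before:.3f} after={p_after:.3f} gap={p_after - p_before:.3f}")
```

The What-If Tool performs this style of edit-and-compare interactively in a visual interface, so the same question can be asked without writing code like this.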