
How to Test Models for Fairness with Fairlearn: Deep Dive

Join us to learn about our open-source machine-learning fairness toolkit, Fairlearn, which empowers developers of artificial intelligence systems to assess their systems' fairness and mitigate any observed fairness issues. Fairlearn focuses on negative impacts on groups of people, such as groups defined by race, gender, age, or disability status.
Fairlearn has two components. The first is an assessment dashboard, with both high-level and detailed views, for identifying which groups are negatively impacted. The second is a set of strategies for mitigating fairness issues, which are easy to incorporate into existing machine-learning pipelines. Together, these components empower data scientists and business leaders to navigate trade-offs between fairness and performance and to select the mitigation strategy that best fits their needs.
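To make the assess-then-mitigate workflow concrete, here is a minimal sketch using Fairlearn's programmatic APIs (MetricFrame for group-wise assessment, ExponentiatedGradient with a DemographicParity constraint for mitigation) alongside scikit-learn. The synthetic dataset, the "group" sensitive feature, and the logistic-regression estimator are placeholder assumptions for illustration, not details from the episode; the dashboard component is not shown.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    from fairlearn.metrics import MetricFrame, selection_rate
    from fairlearn.reductions import DemographicParity, ExponentiatedGradient

    # Placeholder data: a small synthetic binary-classification set with one
    # sensitive feature ("group") standing in for e.g. gender or age bracket.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    group = rng.choice(["A", "B"], size=500)
    y = (X[:, 0] + 0.5 * (group == "A") + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Assessment: train a baseline model and compare metrics across groups.
    baseline = LogisticRegression().fit(X, y)
    y_pred = baseline.predict(X)
    metrics = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(metrics.overall)   # metrics over the whole dataset
    print(metrics.by_group)  # the same metrics broken down per group

    # Mitigation: retrain under a demographic-parity constraint via the
    # exponentiated-gradient reduction, then re-assess the mitigated predictor.
    mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=group)
    y_pred_mitigated = mitigator.predict(X)

Re-running the same MetricFrame assessment on y_pred_mitigated shows how the gap in selection rate between groups changes, and at what cost to overall accuracy, which is exactly the fairness-performance trade-off the toolkit is meant to help you navigate.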