AI Fairness 360

This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. We invite you to use and improve it.


The toolkit includes ten state-of-the-art bias mitigation algorithms that can address bias throughout the AI application lifecycle. Contributions of additional algorithms are welcome.
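One of the classic techniques in this family is reweighing (Kamiran & Calders), which assigns instance weights so that group membership and outcome become statistically independent in the training data. The sketch below is a self-contained illustration of the idea, not AIF360's own API (the toolkit wraps this in its `aif360.algorithms.preprocessing.Reweighing` class); the function name and example data are hypothetical.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Compute instance weights that make group membership and label
    statistically independent (the reweighing idea of Kamiran & Calders).
    `groups` and `labels` are parallel sequences of hashable values."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    # w(g, y) = P(g) * P(y) / P(g, y): the ratio of the expected joint
    # frequency under independence to the observed joint frequency.
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical example: group 0 receives the favorable label (1) less often.
groups = [0, 0, 0, 1, 1, 1, 1, 1]
labels = [1, 0, 0, 1, 1, 1, 1, 0]
weights = reweighing_weights(groups, labels)
```

Training a classifier with these sample weights de-emphasizes over-represented (group, label) combinations, so the weighted favorable-outcome rate is the same in every group while the total weight still sums to the dataset size.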

About this site

AI Fairness 360 was created by IBM Research and donated by IBM to the Linux Foundation AI & Data.

Additional research sites that advance other aspects of Trusted AI include:

AI Explainability 360
AI Privacy 360
Adversarial Robustness 360
Uncertainty Quantification 360
AI FactSheets 360