FAIR-CARE: A comparative evaluation of unfairness mitigation approaches

Criscuolo, Chiara; Salnitri, Mattia; Martinenghi, Davide
Journal article, 2026-01-01
Published in: INFORMATION AND SOFTWARE TECHNOLOGY

Abstract

Bias and unfairness in Machine Learning (ML) are challenging to detect and mitigate, particularly in critical fields such as finance, hiring, and healthcare. While numerous unfairness mitigation techniques exist, most evaluation frameworks assess only a limited set of fairness metrics, primarily focusing on the trade-off between fairness and accuracy. We introduce FAIR-CARE, a new, robust, open-source evaluation pipeline for the systematic assessment of unfairness mitigation techniques. Our approach simultaneously evaluates multiple fairness and performance metrics across various ML models. We conduct a comparative analysis on healthcare datasets with diverse distributions (including target class, protected attribute, and their joint distributions) to identify the most effective mitigation technique for each processing type (pre-, in-, and post-processing). Furthermore, we determine the best-performing techniques across different datasets, fairness metrics, performance metrics, and ML models. Finally, we provide practical insights into the application of these techniques, offering actionable guidance for both researchers and practitioners.

Citation: Criscuolo, C., Salnitri, M., & Martinenghi, D. (2026). FAIR-CARE: A comparative evaluation of unfairness mitigation approaches [journal article]. In INFORMATION AND SOFTWARE TECHNOLOGY. Retrieved from https://hdl.handle.net/10446/317050
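As context for the multi-metric evaluation the abstract describes, here is a minimal, illustrative sketch of two common group-fairness metrics, statistical parity difference and equal opportunity difference, on hypothetical binary predictions split by a protected attribute. This is not the FAIR-CARE pipeline itself; the function names and toy data are assumptions chosen purely for illustration.

```python
# Illustrative group-fairness metrics (NOT the FAIR-CARE implementation).
# y_pred: predicted labels (0/1), y_true: ground-truth labels (0/1),
# group: protected-attribute value (0/1) for each individual.

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)."""
    def positive_rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / max(1, len(preds))
    return positive_rate(0) - positive_rate(1)

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(group = 0) - TPR(group = 1): gap in true-positive rates."""
    def tpr(g):
        # Predictions for the actually-positive members of group g.
        preds = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
        return sum(preds) / max(1, len(preds))
    return tpr(0) - tpr(1)

# Hypothetical toy data: four individuals per group.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))          # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.5
```

A value of 0 for either metric would indicate parity between the two groups; a multi-metric pipeline such as the one described above would compute several such metrics alongside performance measures, since mitigating one fairness metric does not guarantee improvement in another.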
Attached file:
2025 FAIR-CARE A comparative evaluation of unfairness mitigation approaches.pdf
Description: Article
Version: publisher's version
License: Creative Commons
Size: 2.16 MB
Format: Adobe PDF
Access: open access


Use this identifier to cite or link to this document: https://hdl.handle.net/10446/317050