Criscuolo, C., Dolci, T., & Salnitri, M. (2024). Mitigating Unfairness in Machine Learning: A Taxonomy and an Evaluation Pipeline. Retrieved from https://hdl.handle.net/10446/295645

Mitigating Unfairness in Machine Learning: A Taxonomy and an Evaluation Pipeline

Criscuolo, Chiara; Dolci, Tommaso; Salnitri, Mattia
2024-01-01

Abstract

Big data poses challenges in maintaining ethical standards for reliable outcomes in machine learning. Data that inaccurately represent populations may result in biased algorithmic models, whose application leads to unfair decisions in sensitive domains such as medicine and industry. To address this issue, many fairness mitigation techniques have been introduced, but the proliferation of overlapping methods complicates decision-making for data scientists. This paper proposes a taxonomy to organize these techniques and a pipeline for their evaluation, supporting practitioners in selecting the most suitable ones. The taxonomy classifies and describes techniques qualitatively, while the pipeline offers a quantitative framework for their evaluation and comparison. Together, they support data scientists in addressing biased data and models effectively.
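Purely as an illustration of what a quantitative evaluation of fairness mitigation techniques can involve, the sketch below computes statistical parity difference, a common group-fairness metric. This is not the pipeline proposed in the paper: the metric choice, function names, and toy data are assumptions added for illustration only.

# Minimal sketch (illustrative assumption, not the paper's pipeline):
# measure the gap in positive-prediction rates between two demographic groups.

def statistical_parity_difference(predictions, groups, privileged, unprivileged):
    """Difference in positive-prediction rates between two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    def positive_rate(group):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        return sum(preds) / len(preds)

    return positive_rate(privileged) - positive_rate(unprivileged)


if __name__ == "__main__":
    # Toy example: binary predictions for individuals from two groups, A and B.
    y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
    group = ["A", "A", "A", "A", "B", "B", "B", "B"]

    spd = statistical_parity_difference(y_pred, group, privileged="A", unprivileged="B")
    print(f"Statistical parity difference: {spd:.2f}")  # 0.75 - 0.25 = 0.50

In a comparative evaluation, a metric like this would typically be computed before and after applying each candidate mitigation technique, alongside standard accuracy measures, so that the fairness-performance trade-off of each technique can be compared on equal terms.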
File attached to the record:

File: 2024_Mitigating Unfairness in Machine Learning.pdf
Access: open access
Version: publisher's version
License: Creative Commons
Size: 689.39 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/10446/295645