Essbai, W., Bombarda, A., Bonfanti, S., & Gargantini, A. M. (2024). A Framework for Including Uncertainty in Robustness Evaluation of Bayesian Neural Network Classifiers. Retrieved from https://hdl.handle.net/10446/281289

A Framework for Including Uncertainty in Robustness Evaluation of Bayesian Neural Network Classifiers

Essbai, Wasim; Bombarda, Andrea; Bonfanti, Silvia; Gargantini, Angelo Michele
2024-01-01

Abstract

Neural networks (NNs) play a crucial role in safety-critical fields, requiring robustness assurance. Bayesian Neural Networks (BNNs) address data uncertainty by providing probabilistic outputs. However, the literature on BNN robustness assessment is still limited and focuses mainly on adversarial examples, which are often impractical in real-world applications. This paper introduces a fresh perspective on BNN classifier robustness that considers natural input variations while accounting for prediction uncertainty. Our approach excludes predictions labeled as "unknown" and enables practitioners to define alteration probabilities, penalize errors beyond a specified threshold, and tolerate varying error levels below it. We present a systematic approach for evaluating the robustness of BNNs, introducing new evaluation metrics that account for prediction uncertainty. We conduct a comparative study of two NNs, a standard MLP and a Bayesian MLP, on the MNIST dataset. Our results show that, by leveraging the estimated uncertainty, it is possible to enhance the system's robustness.
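
To make the uncertainty-based rejection described in the abstract concrete, the sketch below estimates predictive uncertainty from Monte Carlo samples of a Bayesian classifier and treats high-entropy predictions as "unknown" before scoring accuracy on the remainder. This is a minimal NumPy sketch under our own assumptions, not the paper's implementation: the use of predictive entropy, the parameter name entropy_threshold, and the toy data are all illustrative.

import numpy as np

def predictive_stats(mc_probs):
    # mc_probs: array of shape (T, N, C) holding T Monte Carlo forward
    # passes of a Bayesian classifier over N inputs with C classes.
    mean_probs = mc_probs.mean(axis=0)  # (N, C) mean predictive distribution
    # Predictive entropy per input; small epsilon avoids log(0).
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    return mean_probs, entropy

def evaluate_with_rejection(mc_probs, labels, entropy_threshold=0.5):
    # Mark high-uncertainty predictions as "unknown" and compute accuracy
    # only on the retained (confident) predictions.
    mean_probs, entropy = predictive_stats(mc_probs)
    preds = mean_probs.argmax(axis=1)
    retained = entropy <= entropy_threshold   # mask of confident predictions
    coverage = retained.mean()                # fraction of inputs answered
    if retained.any():
        acc_on_retained = (preds[retained] == labels[retained]).mean()
    else:
        acc_on_retained = float("nan")        # every prediction was rejected
    return coverage, acc_on_retained

# Toy usage: 20 MC samples, 5 inputs, 3 classes (random stand-in data).
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 5, 3))
mc_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
labels = rng.integers(0, 3, size=5)
print(evaluate_with_rejection(mc_probs, labels))

Sweeping entropy_threshold trades coverage (how many inputs receive an answer) against accuracy on the retained set, which is presumably the kind of trade-off the paper's uncertainty-aware metrics are designed to capture.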
Attached file:
  • File: 3643786.3648026-2.pdf (Adobe PDF, 638.35 kB)
  • Access: open access
  • Version: publisher's version
  • License: Creative Commons

Use this identifier to cite or link to this document: https://hdl.handle.net/10446/281289