
Mallozzi, P., Pelliccione, P., & Menghi, C. (2018). Keeping intelligence under control. Retrieved from https://hdl.handle.net/10446/236951

Keeping intelligence under control

Mallozzi, Piergiuseppe; Pelliccione, Patrizio; Menghi, Claudio
2018-01-01

Abstract

Modern software systems, such as smart systems, are based on continuous interaction with the dynamic and partially unknown environment in which they are deployed. Classical development techniques, based on a complete description of how the system must behave under different environmental conditions, are no longer effective. Instead, modern techniques should produce systems that autonomously learn how to behave under different environmental conditions. Machine learning techniques make it possible to create systems that learn how to execute a set of actions to achieve a desired goal. When a change occurs, the system can autonomously learn new policies and strategies for action execution. This flexibility comes at a cost: the developer no longer has full control over the system's behaviour. Thus, there is no way to guarantee that the system will not violate important properties, such as safety-critical properties. To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms that ensure the decisions taken by the machine learning algorithm do not violate safety-critical requirements. This paper proposes an approach that combines machine learning with runtime monitoring to detect violations of system invariants in the action-execution policies.
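
As a rough illustration of the idea described in the abstract (this is not code from the paper), the sketch below wraps a learned policy with a runtime monitor that checks each proposed action against a safety invariant and vetoes violations. All names (SafeAgent, invariant, fallback_action) and the obstacle-distance invariant are illustrative assumptions.

    import random

    def invariant(state, action):
        """Example invariant: never move forward when an obstacle is too close."""
        return not (action == "forward" and state["obstacle_distance"] < 1.0)

    class SafeAgent:
        """Wraps a learned policy; checks each proposed action at runtime."""
        def __init__(self, policy, fallback_action):
            self.policy = policy             # action-selection function learned by the ML component
            self.fallback = fallback_action  # known-safe default action

        def act(self, state):
            action = self.policy(state)   # action proposed by the learned policy
            if invariant(state, action):  # runtime monitor: check before execution
                return action
            return self.fallback          # veto: substitute the safe action

    # Usage: a stub policy that occasionally proposes an unsafe move.
    policy = lambda state: random.choice(["forward", "turn_left", "stop"])
    agent = SafeAgent(policy, fallback_action="stop")
    print(agent.act({"obstacle_distance": 0.5}))  # never prints "forward"

The monitor leaves the learning component free to adapt its policy while guaranteeing, at each step, that no executed action violates the stated invariant.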
Attached files:

File: 3195555.3195558.pdf
Format: Adobe PDF
Size: 146.2 kB
Version: publisher's version
License: Aisberg default license
Access: archive administrators only


Use this identifier to cite or link to this document: https://hdl.handle.net/10446/236951
Citations
  • Scopus: 4
  • Web of Science (ISI): 3