Ferretti, G., & Chinellato, E. (2019). Can Our Robots Rely on an Emotionally Charged Vision-for-Action? An Embodied Model for Neurorobotics. Retrieved from https://hdl.handle.net/10446/284872

Can Our Robots Rely on an Emotionally Charged Vision-for-Action? An Embodied Model for Neurorobotics

Ferretti, Gabriele; Chinellato, Eris
2019-01-01

Abstract

The aim of blended cognition is to contribute to the design of more realistic and efficient robots by examining how humans combine several kinds of affective, cognitive, sensorimotor, and perceptual representations. This chapter is about vision-for-action. In humans and non-human primates (as well as in most mammals), motor behavior in general, and visuomotor representations for grasping in particular, are influenced by emotions and by the affective perception of salient properties of the environment. This aspect of motor interaction is not examined in depth in the biologically plausible robot models of grasping currently available. The aim of this chapter is to propose a model that can help make neurorobotic solutions more embodied, by integrating empirical evidence from affective neuroscience with neural evidence from visual and motor neuroscience. Our integration is an attempt to make a neurorobotic model of vision and grasping more compatible with the embodied view of cognition and perception followed in neuroscience, which seems to be the only view able to account for the biological complexity of cognitive systems and, accordingly, to explain their high flexibility and adaptability with respect to the environment they inhabit.
File attached to the record:
Blended Cognition.pdf (publisher's version, Adobe PDF, 5.38 MB) — access restricted to archive administrators
License: Aisberg default license

Aisberg ©2008 Library Services, Università degli studi di Bergamo | Terms of use

Use this identifier to cite or link to this document: https://hdl.handle.net/10446/284872