(2025). Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships? [journal article]. In PHILOSOPHY & TECHNOLOGY. Retrieved from https://hdl.handle.net/10446/296570
Second-Person Authenticity and the Mediating Role of AI: A Moral Challenge for Human-to-Human Relationships?
Battisti, Davide
2025-01-01
Abstract
The development of AI tools, such as large language models and speech emotion and facial expression recognition systems, has raised new ethical concerns about AI’s impact on human relationships. While much of the debate has focused on human-AI relationships, less attention has been devoted to another class of ethical issues, which arise when AI mediates human-to-human relationships. This paper opens the debate on these issues by analyzing the case of romantic relationships, particularly those in which one partner uses AI tools, such as ChatGPT, to resolve a conflict and apologize. After reviewing some possible, non-exhaustive explanations for the moral wrongness of using AI tools in such cases, I introduce the notion of second-person authenticity: a form of authenticity that is assessed by the other person in the relationship (e.g., a partner). I then argue that at least some actions within romantic relationships should respect a standard of authentic conduct since the value of such actions depends on who actually performs them and not only on the quality of the outcome produced. Therefore, using AI tools in such circumstances may prevent agents from meeting this standard. I conclude by suggesting that the proposed theoretical framework could also apply to other human-to-human relationships, such as the doctor-patient relationship, when these are mediated by AI; I offer some preliminary reflections on such applications.
File: s13347-025-00857-w.pdf
Access: open access
Version: publisher's version
License: Creative Commons
Size: 699.12 kB
Format: Adobe PDF
Aisberg ©2008 Library Services, University of Bergamo | Terms of use