Seminar

MaLGa Colloquia - Towards a synergistic human-machine interaction and collaboration: XAI and Hybrid Decision Making Systems. State-of-the-art and research questions.

25/11/2024

[Photo: Fosca Giannotti]


Speaker

Fosca Giannotti - Scuola Normale Superiore (SNS), Pisa, Italy


Abstract

Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. Explaining to humans how AI reasons is only part of the problem; we must also be able to design AI systems that understand and collaborate with humans. Hybrid decision-making systems aim at leveraging the strengths of both human and machine agents to overcome the limitations that arise when either agent operates in isolation.

This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date, and then focuses on paradigms supporting synergistic human-machine interaction and collaboration to improve joint performance in high-stakes decision-making. Three distinct paradigms, characterized by different degrees of human agency, will be discussed: i) human oversight, in which a human expert monitors AI predictions augmented with explanations; ii) learning to defer, in which the machine learning model is given the possibility to abstain from making a prediction when it receives an instance where the risk of a misprediction is too large; iii) collaborative and interactive learning, in which human and AI engage in communication to integrate their distinct knowledge and facilitate the human's ability to make informed decisions.
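As a purely illustrative sketch (not part of the talk itself), the "learning to defer" idea in paradigm ii) is often approximated by a simple confidence-threshold rejector: the model predicts only when its confidence is high enough, and otherwise hands the instance to a human expert. The function name, threshold value, and toy probabilities below are hypothetical.

```python
# Minimal sketch of a confidence-threshold rejector, a common
# baseline for "learning to defer". All names and values here
# are illustrative assumptions, not from the lecture.

def predict_or_defer(class_probs, threshold=0.8):
    """Return (label, None) if the model is confident enough,
    or (None, "defer") to pass the instance to a human expert."""
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    if class_probs[best] >= threshold:
        return best, None
    return None, "defer"

# Toy usage: one confident prediction, one deferred instance.
print(predict_or_defer([0.05, 0.92, 0.03]))  # (1, None)
print(predict_or_defer([0.45, 0.40, 0.15]))  # (None, 'defer')
```

In the full learning-to-defer setting, the rejection rule is learned jointly with the classifier, taking the human expert's own error profile into account, rather than being a fixed threshold as in this sketch.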

This lecture is joint work with Clara Punzi, Mattia Setzu and Roberto Pellungrini.


Bio

Fosca Giannotti is Professor of Computer Science at Scuola Normale Superiore (SNS), Pisa, Italy. She co-leads the Pisa KDD Lab, a joint research initiative of the University of Pisa, ISTI-CNR and SNS.

Her research focuses on methods for trustworthy and human-centered explainable AI, and on their application to understanding and predicting complex phenomena at both the individual and social scale in domains characterized by human-machine collaboration. Exemplar case studies have been performed in medicine, human mobility, finance and online social behavior. She is the PI of the ERC project "XAI – Science and Technology for the Explanation of AI Decision Making". She is the author of more than 300 papers and has coordinated dozens of European projects and industrial collaborations. Professor Giannotti is deputy director of SoBigData++, the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of dozens of cutting-edge European research centers providing an open platform for interdisciplinary data science and data-driven innovation.

On March 8, 2019, she was featured as one of the 19 Inspiring Women in AI, Data Science and Machine Learning by KDnuggets.com, a leading site on AI, Data Mining and Machine Learning.

Since February 2020, Fosca Giannotti has been Italy's Horizon Europe delegate for Cluster 4 (Digital, Industry and Space).


When

Monday, November 25th, 2024, 16:00


Where

Room 322, UniGe DIBRIS/DIMA, Via Dodecaneso 35