The achievements of contemporary machine learning (ML) methods highlight the enormous potential of integrating AI systems into various domains of medicine, ranging from the analysis of diagnostic images in radiology and dermatology to increasingly complex applications such as forecasting in intensive care units or the diagnosis of psychiatric disorders. However, despite this potential, many medical professionals remain sceptical toward the integration of machine learning tools into their practices. This scepticism is mostly related to the opacity, or so-called black-box, problem: the difficulty humans face in understanding the reasoning behind the outcomes of ML models and, ultimately, in deciding whether to trust them.
Much effort has been dedicated in recent years to overcoming this difficulty, from policy and ethical as well as engineering and design perspectives. Nevertheless, there is still much disagreement among scholars about the real effectiveness of the various proposed solutions. For example, alongside the enthusiasts, a growing number of sceptics question the real usefulness of the programme of Explainable AI (XAI). Open questions include: Can the various solutions proposed to overcome the AI opacity problem effectively support the successful integration and appropriation of AI systems in medicine, and if so, how and to what extent? What paradigms other than explainability lead to the adoption of AI? Which health system actor should be held accountable for the explainability of AI? Which kinds of solutions are most suitable for supporting successful appropriation and integration into medical practices? How and to what extent should current XAI methods be improved or changed in view of the drawbacks highlighted by empirical research?
With the goal of continuing their scientific collaborations on the ethics of AI systems, the College of Humanities at EPFL, the Digital Society Initiative at the University of Zurich, and the Swiss AI Lab IDSIA USI-SUPSI are organising an international meeting in Lugano on November 2-3, 2023, aimed at bringing together philosophers, computer scientists, engineers, and medical professionals to discuss and outline possible answers to these crucial questions.
Details about the program will be made available here soon.
In addition to the invited speakers, we invite the submission of abstracts for the meeting from early career scholars (students, postdoctoral researchers, and junior faculty).
Abstracts should be suitable for a 20-minute talk (plus time for discussion) and should be submitted as PDF files by August 31, 2023, to: felix.gille@uzh.ch
Submissions should include name, affiliation, title, and an extended abstract (up to 500 words, not including references) in a single PDF file.
Notification of outcomes will be made by the end of September 2023.
We are committed to fostering diversity and equality. Submissions from underrepresented groups are particularly welcome.