Publications
The 'Wild West' of Medicine: Exploring the Emergence of 'Grassroots' AI Governance in Radiology
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1018-1031.
Over the past ten years, a steady increase in clinical AI adoption has been accompanied by concerns about potential risks. As a result, there is a growing body of literature on the regulatory implications of AI devices, as well as studies exploring clinician attitudes towards AI. However, there has been limited work examining ‘bottom-up’, hospital-level AI governance approaches. To fill this gap, we conducted a qualitative study interviewing 22 healthcare practitioners with AI governance experience within radiology departments and/or professional societies in the US and UK. We aimed to understand the current state of AI adoption and governance, clinician perspectives on responsibility, and the interaction between ‘top-down’ and ‘bottom-up’ governance approaches. Our findings indicate disparities in resources and AI expertise, as well as differences in the scope, composition, remit, and role of AI governance committees across hospitals. Additionally, we uncover emerging challenges in negotiating responsibility norms for AI outcomes and performance monitoring. We also discuss the AI governance roles taken on by some clinicians, often on a voluntary basis, and the challenges they face in navigating siloed, hierarchical organizations. Finally, we analyze participant recommendations, including the development of streamlined guidance on responsible AI adoption, better staff education and training, and centralized approaches to performance monitoring.
Artificial Intelligence and the imperative of responsibility: reconceiving AI governance as social care
The Routledge Handbook of Philosophy of Responsibility, 1st edn, edited by Maximilian Kiener. Routledge.
The accelerating development of artificial intelligence (AI) systems has generated acute and interlinked challenges for social trust, responsibility ascription, and governance. While today’s AI tools lack the type of agency that can bear responsibility, they are deployed in ways that create novel configurations and social appearances of agential power. That is, they allow new things to be done by us, for us, and to us, in ways that do not easily fit our existing practices for governing moral and legal responsibility. This is commonly referred to as the problem of AI ‘responsibility gaps’. We confront this challenge by framing normative responsibility in a new way: not as a fact about agents to be discovered, nor as a set of criteria that responsible agents must satisfy, but as a relational practice of social care in the exercise of power, one that responds to others’ vulnerability to our power. Drawing on examples from steamboat engineering, consumer finance, and environmental governance, we highlight how responsibility gaps have historically generated the moral and political imperative to construct new forms of responsible agency to balance novel agential powers, of which AI is merely the latest iteration.