* If you would like to join the online seminars and receive the Zoom info, send an email with the subject "Monthly Seminar - rTAIM" here.
#10 Edmund Ugar
27 May 2024 | 3:00pm - 4:00pm (Lisbon Time Zone)
[Zoom info: ID: 94305823062 | Password: 351181]
In Light of Medical Machine Learning Technologies in Sub-Saharan Africa: African Agency Reconsidered
#Seminar 10: The growing application of machine learning techniques in the second decade of the present century has given artificial intelligence a more significant role in medicine and, increasingly, in public health. Image recognition, in particular, has become extremely effective, and clinicians increasingly rely on machine learning technologies for clinical diagnosis and prognosis of medical conditions. While these technologies have proven relevant and efficient in medical diagnosis, in this talk I argue that they threaten an integral aspect of what constitutes human agency for Africans: interpersonal relationships. Overreliance on machine learning technology for clinical diagnosis and recommendations may diminish the values of interpersonal relationships of identity and solidarity between patients and doctors in sub-Saharan Africa. I show how vital these normative values are in the making of a person and their agency within the sub-Saharan African value system.
Short bio: Edmund Terem Ugar is a doctoral candidate in Philosophy at the University of Johannesburg, a research fellow at the Centre for the Philosophy of Epidemiology, Medicine and Public Health at Durham University, and a research coordinator at the University of Johannesburg Centre for Africa-China Studies. His research lies broadly within the philosophy of social technologies in medical and healthcare domains.
#9 Ariane Hanemaayer
30 April 2024 | 3:00pm - 4:00pm (Lisbon Time Zone)
Reframing AI in clinical decision making: A critical sociological approach to historical ontology
#Seminar 9: Artificial intelligence, machine learning, and algorithmic technologies are often integrated into institutional settings on the basis of their ability to solve a human problem. In this seminar, I will focus on one such case, in which AI and similar technologies have been implemented with the justification that they solve problems with clinical judgment, such as human error and variation in diagnostic classification. My argument introduces historical ontology as a way of reframing current debates. By historicizing debates over AI as a technological or ethical issue, I show how these underlying assumptions have shaped justifications for and against the use of computing technologies and expert systems in medicine since the 1960s. While debates over responsibility, ethics, and machine capabilities are important to the question of the role of technologies in medicine, they leave aside other significant ontological questions about the technologies themselves and, in this case, clinical judgment. At the end of this seminar, I propose to clarify the kinds of questions that can be spotlighted through a critical approach, and make a case for reframing the predominant focus on technical and ethical issues.
Short bio: Ariane Hanemaayer is an Associate Professor at Brandon University (Canada) in the departments of Sociology and Gender and Women's Studies, and an Affiliate Scholar at the Max Planck Institute for the History of Science.
#8 David Leucas
21 March 2024 | 3:00pm - 4:00pm (São Paulo Time Zone)
The Ethics of Facial Expression Recognition in AI
#Seminar 8: The automated analysis of emotional facial expressions is among the main capabilities arousing growing interest across many sectors, which see in the capture of refined biometric data the raw material for creating psychological profiles for the most diverse purposes. It is therefore necessary to reflect on what FER (facial expression recognition) tools are actually capturing, and on their epistemological, structural, and pragmatic reliability, in addition to the countless ethical implications involved. Almost all FER tools are based on a discrete affective model, which understands facial expressions as universal and evolutionarily inherited signals. But is the mere perception of facial movements enough to infer the myriad processes contained in an emotion? Furthermore, would CNNs (convolutional neural networks) be capable of distinguishing emotional expressions from other affective and non-affective states? These questions are necessary for a proper understanding of the possibilities of this technology.
Short bio: David Leucas is a psychologist and a Master's student in Philosophy at PPGF/UFRJ. He holds a Master's in non-verbal behavior and credibility analysis from UDIMA/Madrid and is President of AIRES/UFRJ – Artificial Intelligence Robotics Ethics Society.
#7 Jaroslav Malík
6 December 2023 | 2:30pm - 4:00pm (Lisbon Time Zone)
Explanatory AI in Medicine: The need for pragmatics of XAI
#Seminar 7: The relationship between a clinician and a patient is built on trust, which ultimately rests on the transparency of medical methods. However, as neural networks (NNs) are employed in medicine, they prove problematic because NNs are opaque, black-box systems. An explanatory gap is thus generated, undermining the established trust between clinicians and patients. In this paper, I argue that crossing this gap requires not only explainable AI (XAI) but also explanatory AI. A problem with current XAI methods is that they can, at best, provide causal how-explanations, which only generate observations that do not get us to the underlying cause of an event. What is required is to understand the process of why-explanations, which often involve counterfactual reasoning based on more basic observations. To do that, it is necessary to engage with clinical practice itself in order to formulate a standard for explanations. I will argue that, when diagnosing, doctors often engage in selective abductive reasoning, hypothesising what might be the case depending on the available evidence. Therefore, a hypothesis-driven model of explanatory AI is required, which could be used to design an AI interpreter meant to provide more illuminating explanations of AI behaviour. I further argue that we must exploit the rising transformer-architecture paradigm, because transformer NNs have proven capable of the necessary abductive reasoning, alongside the multi-modality and context-sensitivity useful for human-interpretable explanations.
Short bio: Jaroslav Malík is a PhD candidate at the University of Hradec Králové. His philosophical work explores the intersection of philosophy of mind, philosophy of technology, and philosophical anthropology.
#6 Klaus Gärtner
15 November 2023 | 11h00 - 12h30 (Lisbon Time Zone)
Conscious AI in Medicine - Is that really a good Idea?
#Seminar 6: The relation between consciousness and artificial intelligence (AI) has puzzled humanity for a long time. It raises questions such as: Can AI be conscious? Or, more recently, can a Large Language Model (LLM) be conscious? Should we create conscious AI? How dangerous would that creation be for humanity? And would conscious AI suffer? Our interest in this relation seems to be at least twofold: first, we are concerned about the nature of consciousness and machines; second, we are (ethically) troubled by the idea of what would follow if AI could really be conscious. In this talk, I will explore the latter issue. To do so, I will consider the problem of conscious AI in general and specifically ask whether it is a good idea to create it. In my view, this matter is closely related to our concerns about whether conscious AI could be dangerous for humanity, whether AI can suffer, and what we should conclude from that. I will then apply my thoughts to the case of medicine, where doubts about these issues seem even more salient. I will conclude that there is a dilemma here: on the one hand, conscious AI can help in a variety of patient-care settings; on the other hand, negative outcomes for both the conscious AI and patients cannot be excluded.
Short bio: Klaus Gärtner is a researcher at the Departamento de História e Filosofia das Ciências at the Faculdade de Ciências da Universidade de Lisboa (DHFC/FCUL) and the Centro de Filosofia das Ciências da Universidade de Lisboa (CFCUL).
#5 Georgina Mills
11 October 2023 | 14h30 - 16h00 (Lisbon Time Zone)
Looping effects in medical kinds: Fast-moving targets
#Seminar 5: In 1995, Ian Hacking published his influential paper on the looping effects of human kinds. In it, Hacking argued that, unlike animals, humans have the unique ability to engage with any characterization of themselves as a particular "kind" and thereby change the nature of the kind, making human kinds a dynamic phenomenon. In previous work, I have argued that in a medical context this looping effect can be a primarily epistemic phenomenon, as diagnostic profiles change when more is learned about a particular medical condition. In this talk, I will explore the implications of this idea if A.I. is implemented as a diagnostic tool. A.I. could lead to epistemic and diagnostic looping effects if it is used to assist diagnosis. This may benefit patients, but there are also possible risks that I will identify during this talk. Using A.I. for diagnosis and research may allow clinicians to build a broader and more dynamic diagnostic profile, which might result in increased or earlier diagnosis of certain medical conditions. If machine learning could be deployed as a diagnostic and research tool at the same time, this might result in a broader understanding of the full symptom profile at the point of diagnosis. However, there are some potential risks to the use of diagnostic A.I. Medical A.I. might increase, rather than decrease, diagnostic barriers. Looping effects in medical A.I. could exacerbate a pre-existing effect whereby atypical presentation of some medical conditions is a barrier to accurate and timely diagnosis. This might create a different kind of looping effect, in which diagnosis reinforces typical presentation as the only diagnostic profile for medical conditions. There are also concerns that confounding factors may become more problematic if clinician judgement is circumvented. Factors such as co-morbid conditions or alternative explanations already present a problem for doctors hoping to causally attribute a symptom to a disease, and the use of medical A.I. might help with this problem or exacerbate it. I answer these concerns by arguing that the goal of artificial intelligence in the medical context ought to be learning rather than automation. With an epistemic goal at the heart of our deployment of A.I., we may be able to build a diagnostic tool that allows for both research into disease presentation and faster, more accurate diagnosis.
Short bio: Georgina H. Mills is a PhD researcher at Tilburg University working in Philosophy of Science.
#4 Yves St. James Aquino
13 September 2023 | 10h - 11h30 (Lisbon Time Zone)
Ethics of Explainable Artificial Intelligence in Medicine: Professional Perspectives
#Seminar 4: Research on healthcare applications of machine learning (ML), a type of artificial intelligence (AI), has proliferated across clinical processes such as the diagnosis and screening of diseases, the allocation of healthcare resources, and the development of personalised treatments. Given the increasingly complex processes behind ML systems, explainability has been considered a major caveat to their adoption in healthcare. This presentation reports the preliminary findings of a qualitative investigation of the perspectives of professional stakeholders (e.g. clinicians, data scientists, entrepreneurs, and regulators) working on ML algorithms in diagnosis and screening. All participants agreed on the qualities that diagnosis should have: it should proceed in a way that enables human oversight, promotes critical thinking among clinicians, and ensures patient safety. However, participants were divided on whether explanation was an important means to achieve this end. Broadly, some participants proposed 'Outcome-assured' diagnostic practices, while others proposed 'Explanation-assured' diagnostic practices, a distinction that applied with or without the use of AI. 'Outcome-assured' and 'Explanation-assured' approaches differed in the significance attributed to explanation in part because they conceptualised explanation differently, not just in relation to what explanation is, but also in relation to the level of explanation and who might be owed one.
Short bio: Yves Saint James Aquino is a postdoctoral research fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health and Society, University of Wollongong (Australia).
#3 Amitabha Palmer
19 July 2023 | 14h30 - 16h (Lisbon Time Zone)
From Theory to Practice: The Ethics of Deploying AI in Healthcare Settings
#Seminar 3: Current national and international guidance for the ethical design and development of AI and robotics emphasizes ethical theory. Various governing and advisory bodies have generated sets of broad ethical principles, which institutional decision-makers are encouraged to apply to particular practical decisions. While much of this literature examines the ethics of designing and developing AI and robotics, medical institutions typically must make purchase and deployment decisions about technologies that have already been designed and developed. The primary problem facing medical institutions is not one of ethical design but of ethical deployment. The purpose of this paper is to develop a practical model by which medical institutions may make ethical deployment decisions about ready-made advanced technologies. Our slogan is "more process, less principles." Ethically sound decision-making requires that the process by which medical institutions make such decisions include participatory, deliberative, and conservative elements. We argue that our model preserves the strengths of existing frameworks, avoids their shortcomings, and delivers its own moral, practical, and epistemic advantages.
Short bio: Amitabha (Ami) Palmer received his PhD in Philosophy from Bowling Green State University and is a clinical ethicist at the University of Texas MD Anderson Cancer Center.
#2 Karim Zaouaq
21 June 2023 | 14h30 - 16h (Lisbon Time Zone)
Artificial Intelligence in Medicine: Reflecting on Responsibility in Healthcare Delivery
#Seminar 2: With the development of artificial intelligence techniques, in particular robotics and algorithms, the challenge is to know who is responsible for what when an infringement of the most fundamental patient rights occurs. Mistakes, inaccuracies, and data breaches are some examples of the issues that can arise with the use of AI in healthcare, and they can have devastating consequences for patients' health. In this context, the main principles of ethics, namely autonomy, beneficence, non-maleficence, and justice, can be used to promote responsible artificial intelligence in the medical field. But this is not enough to make medical practitioners accountable for their acts. Furthermore, the rapid evolution of the applications of artificial intelligence in medicine exceeds the capacity of legislators, which raises questions about how the law must adapt to meet the new challenges posed by these technologies, about the form of responsibility that should prevail, and about the role that ethics committees can play in this context. To address these questions, this presentation will reflect on the rights of patients, the duties of medical professionals, and the obligations of suppliers of AI services. It will also attempt to highlight the need for algorithmic transparency, privacy, and protection of patient rights and autonomy, alongside the balance between AI service performance and the ability of professionals to take accurate decisions.
Short bio: Karim Zaouaq is an Assistant Professor of Public Law at the Faculty of Law, Sidi Mohamed Ben Abdellah University of Fez.
#1 Steven S. Gouveia
17 May 2023 | 14h30 - 16h (Lisbon Time Zone)
Rebuilding Trust in Black-Box Algorithmic Decision-Making: the Case of Medicine
#Seminar 1: The influence of Artificial Intelligence in Medicine (AIM) is rapidly growing in today's society. The basic promise of these technologies is to produce more reliable, accurate, efficient, and economical health practices than traditional medicine based on purely human reasoning. However, most of these technologies process complex and multifaceted types of data and content and are therefore technically described as having a "black-box" structure: the healthcare professional can understand the inputs of the system and the outputs, but will never have access to what happens "inside" the system, making the medical process opaque (epistemically) and therefore dangerous (ethically), and creating a "trust gap" in the relationship between patients and medical specialists. The purpose of this seminar is to introduce three possible options for dealing with the "trust gap" created by AI medicine: (1) restrict the development and use of AI systems in health so that the gap is never created; (2) accept the benefits of black-box AI systems and ignore the consequences, even if this jeopardizes the doctor-patient relationship; (3) create an "explainable/transparent" artificial intelligence that manages to maintain the benefits of these technologies without creating a gap of trust in the doctor-patient relationship. This ethical analysis will allow the diagnosis and identification of the main relevant aspects that must be considered in the use of artificial intelligence in medicine.
Short bio: Steven S. Gouveia is a Research Fellow at the Mind, Language and Action Group, Institute of Philosophy, University of Porto, leading a project on the Ethics of Artificial Intelligence in Medicine. More info: stevensgouveia.weebly.com.