
Trustworthy AI in Medical Imaging. The MICCAI Society Book Series

  • Book

  • December 2024
  • Elsevier Science and Technology
  • ID: 5978235

Trustworthy AI in Medical Imaging brings together scientific researchers, medical experts, and industry partners working in the field of trustworthiness, bridging the gap between AI research and concrete medical applications. It serves as a learning resource for undergraduates, master's students, and researchers in AI for medical imaging applications.
The book will help readers acquire the basic notions of AI trustworthiness and understand their concrete application in medical imaging, identify pain points and solutions that enhance trustworthiness in medical imaging applications, understand the current limitations and perspectives of trustworthy AI in medical imaging, and identify novel research directions.

Although the problem of trustworthiness in AI is actively researched across disciplines, the adoption and implementation of trustworthy AI principles in real-world scenarios is still in its infancy. This is particularly true in medical imaging, where guidelines and standards for trustworthiness are critical for successful deployment in clinical practice. After setting out the technical and clinical challenges of AI trustworthiness, the book gives a concise overview of the basic concepts before presenting state-of-the-art methods for solving these challenges.

Please note: this is an On Demand product; delivery may take up to 11 working days after payment has been received.

Table of Contents

Preface

Section 1 - Preliminaries

1. Introduction to Trustworthy AI for Medical Imaging & Lecture Plan
2. The Fundamentals of AI Ethics in Medical Imaging

Section 2 - Robustness

3. Machine Learning Robustness: A Primer
4. Navigating the Unknown: Out-of-Distribution Detection for Medical Imaging
5. From Out-of-Distribution Detection and Uncertainty Quantification to Quality Control
6. Domain Shift, Domain Adaptation and Generalization

Section 3 - Validation, Transparency and Reproducibility

7. Fundamentals on Transparency, Reproducibility and Validation
8. Reproducibility in Medical Image Computing
9. Collaborative Validation and Performance Assessment in Medical Imaging Applications
10. Challenges as a Framework for Trustworthy AI

Section 4 - Bias and Fairness

11. Bias and Fairness
12. Open Challenges on Fairness of Artificial Intelligence in Medical Imaging Applications

Section 5 - Explainability, Interpretability and Causality

13. Fundamentals on Explainable and Interpretable Artificial Intelligence Models
14. Causality: Fundamental Principles and Tools
15. Interpretable AI for Medical Image Analysis: Methods, Evaluation and Clinical Considerations
16. Explainable AI for Medical Image Analysis
17. Causal Reasoning in Medical Imaging

Section 6 - Privacy-Preserving ML

18. Fundamentals of Privacy-Preserving and Secure Machine Learning
19. Differential Privacy in Medical Imaging Applications

Section 7 - Collaborative Learning

20. Fundamentals on Collaborative Learning
21. Large-scale Collaborative Studies in Medical Imaging through Meta Analyses
22. Promises and Open Challenges for Translating Federated Learning in Hospital Environments

Section 8 - Beyond the Technical Aspects

23. Stakeholder Engagement: The Path to Trustworthy AI in Healthcare

Authors

Marco Lorenzi, Tenured Research Scientist, EPIONE team of Inria Sophia Antipolis and Université Côte d'Azur, Cedex, France.

Marco Lorenzi is a tenured research scientist at the Inria Center of Université Côte d'Azur (France) and a junior chair holder at the Interdisciplinary Institute for Artificial Intelligence 3IA Côte d'Azur. He is also a visiting Senior Lecturer at the School of Biomedical Engineering & Imaging Sciences at King's College London. His research focuses on developing statistical learning methods to model heterogeneous and secured data in biomedical applications. He is the founder and scientific lead of the open-source federated learning platform Fed-BioMed.

Maria A. Zuluaga, Assistant Professor, Data Science Department, EURECOM, Biot, France. Dr. Zuluaga is an assistant professor in the Data Science Department at EURECOM. She holds a junior chair at the 3IA Institute Côte d'Azur and is a visiting Senior Lecturer within the School of Biomedical Engineering & Imaging Sciences at King's College London.

Her current research focuses on the development of machine learning techniques that can be safely deployed in high-risk domains, such as healthcare, by addressing data complexity, low tolerance to errors, and poor reproducibility.