2020 Event (Lima, Peru; held virtually)
Invited speakers for the 2020 event will be announced soon.
2019 Event (Shenzhen, China)
Bjoern Menze, PhD (Technical University of Munich) – The BRATS challenges: how accurate are algorithms, and how do we find the best one?
Professor Menze works in the field of medical image computing. He develops algorithms that analyze biomedical images using functional and probabilistic models from machine learning, computer vision, and biophysics, with an emphasis on applications in clinical neuroimaging and the personalized modeling of tumor growth. He has organized workshops on medical computer vision and on neuroimaging at MICCAI, NIPS, and CVPR, served on the program committee of MICCAI, and is a member of the editorial board of the Medical Image Analysis journal.
Professor Menze studied physics in Heidelberg (Germany) and Uppsala (Sweden) and obtained a PhD in computer science from Heidelberg University in 2007. He subsequently moved to Boston (USA), where he worked as a postdoctoral researcher at Harvard University, Harvard Medical School, and MIT. This was followed by senior research positions at Inria in Sophia-Antipolis (France) and at ETH Zurich (Switzerland). In 2013 he became the first scholar to be appointed a Rudolf Moessbauer Professor at TUM. In 2019 he was a visiting professor at Maastricht University. At TUM he heads the “Image-based Biomedical Modeling Group” at the Munich School of Bioengineering and the Center for Translational Cancer Research.
Annika Reinke (German Cancer Research Center) – Biomedical Image Analysis Challenges – A long journey
The importance of data science techniques in almost all fields of medicine is increasing at an enormous pace. While clinical trials are the state-of-the-art method for assessing the effect of new medication in a comparative manner, benchmarking in the field of image analysis is governed by challenges. Given that validation of algorithms was traditionally performed on individual researchers’ data sets, this development was a great step forward. On the other hand, the increasing scientific impact of challenges now places huge responsibility on the shoulders of the challenge hosts who take care of the organization and design of such competitions. The performance of an algorithm on challenge data is essential, not only for the acceptance of a paper and its impact on the community, but also for individuals’ scientific careers and for the potential of algorithms to be translated into clinical practice.
In this talk, I will discuss key challenges related to designing, executing, analyzing and reporting challenges, and present new methods and tools to overcome them. Topics will include challenges related to generating consistent labels as well as dealing with uncertainties in reference data and rankings. A further focus will be on how to report and visualize challenge results.
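One common way to probe the uncertainty of a challenge ranking of the kind discussed above is to bootstrap over test cases and see how often each algorithm comes out on top. The sketch below is a generic, stdlib-only illustration of that idea (the function and variable names are ours, not from any specific challenge toolkit), assuming a higher metric value is better:

```python
import random
from statistics import mean

def bootstrap_ranking_stability(scores, n_boot=1000, seed=0):
    """Estimate ranking stability by bootstrapping over test cases.

    scores: dict mapping algorithm name -> list of per-case metric values,
            with all lists in the same case order.
    Returns a dict mapping each algorithm to the fraction of bootstrap
    samples in which it achieved the best mean score (rank 1).
    """
    rng = random.Random(seed)
    algos = list(scores)
    n_cases = len(next(iter(scores.values())))
    wins = {a: 0 for a in algos}
    for _ in range(n_boot):
        # Resample test cases with replacement.
        idx = [rng.randrange(n_cases) for _ in range(n_cases)]
        means = {a: mean(scores[a][i] for i in idx) for a in algos}
        wins[max(means, key=means.get)] += 1
    return {a: wins[a] / n_boot for a in algos}
```

An algorithm that wins in, say, only 60% of bootstrap samples has a far less trustworthy first place than one that wins in all of them, even if both rank first on the full test set.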
Annika Reinke studied mathematics in medicine and life sciences at the University of Lübeck, Germany, with a focus on medical image analysis. In 2017 she joined the division of Computer Assisted Medical Interventions at the German Cancer Research Center (DKFZ) to adapt mathematical concepts to societally relevant topics, such as scientific benchmarking and validation. Having published disruptive findings on biomedical image analysis challenges in Nature Communications, she is a founding member of the initiative of Biomedical Image Analysis ChallengeS (BIAS), which aims for transparent reporting of challenge design and results. She further serves as an active member of the MICCAI board challenge working group, working to bring biomedical image analysis challenges to the next level of quality.
2018 Event (Granada, Spain)
Leo Joskowicz, PhD (Hebrew University of Jerusalem, Israel) – Quantifying the observer variability in volumetric structure segmentations: a large-scale study and a method
Segmentation of anatomical structures and pathologies in medical images is a fundamental technical problem in medical image processing. Producing accurate and reliable segmentations for clinical use is expensive, time consuming, and requires technical expertise. Often it is unclear what the accuracy and quality of these segmentations are, because there is no reference ground truth to compare them to. In this talk we present: 1) a new framework for segmentation variability quantification based on segmentation priors and sensitivity analysis; 2) a method for estimating segmentation variability with no ground truth; and 3) a large-scale manual delineation study to quantify the actual segmentation variability, in which 11 radiologists manually delineated the contours of liver tumors, lung tumors, kidneys, and brain hematomas in 3,193 CT slices from 18 representative CT scans. Our results show that segmentation variability spans a wide range depending on the structure of interest, and that it can be accurately estimated independently of the segmentation method used and with no ground truth.
Joint work with: D. Cohen, Dr. N. Caplan and Prof. J. Sosna, Hadassah University Medical Center
Leo Joskowicz is a Professor at the School of Engineering and Computer Science at the Hebrew University of Jerusalem, Israel, where he has conducted research in computer-assisted surgery, computer-aided mechanical design, computational geometry, and robotics since 1995. He obtained his PhD in Computer Science at the Courant Institute of Mathematical Sciences, New York University, in 1988, and was a Research Scientist at the IBM T.J. Watson Research Center, Yorktown Heights, New York, USA, where he conducted research in intelligent computer-aided design and computer-aided orthopaedic surgery. From 2001 to 2009 he was the Director of the Leibniz Center for Research in Computer Science.
Tal Arbel, PhD (McGill University, Canada) – Challenging Conventional Segmentation Evaluation Metrics in the Context of Focal Pathology (e.g. Lesion) Segmentation from Patient Images
In the context of automatic segmentation for medical images, overlap and boundary distance metrics (e.g. the Hausdorff distance and the Dice coefficient) define the standard for quantifying the performance of an algorithm against structural delineations by experts. These metrics, adopted from computer vision, are well-suited to the context of healthy structure segmentation or segmentation of a single pathological structure, as they typically adhere to the underlying assumptions that: 1) the structure in question exists and makes up a substantial portion of the region of interest, and 2) variability in both ground truth and in automatically generated delineations mainly consists of differences in voxel assignments.
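For concreteness, the two metric families named above can be computed for binary segmentations as follows. This is a generic stdlib sketch, representing each segmentation as a set of voxel coordinates for simplicity; it is not code from any particular evaluation toolkit:

```python
import math

def dice(a, b):
    """Dice coefficient between two segmentations given as sets of voxel
    coordinates: 2 * |A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the largest distance from a point in
    one segmentation to its nearest point in the other (O(|a|*|b|))."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))
```

Note that both measures presuppose non-empty structures of comparable extent: the Dice denominator vanishes when both masks are empty, and a few voxels of disagreement dominate both scores when structures are tiny.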
In this talk, we will illustrate a number of challenges in applying these segmentation evaluation metrics to delineating multiple pathological structures (e.g. lesions, tumours) in patient images, where clinical objectives can be substantially different, and associated assumptions violated. We focus on the illustrative context of Multiple Sclerosis lesion segmentation, where challenges include, but are not limited to: 1) inter/intra-patient lesion variability in terms of size (spanning from a few voxels to over one hundred), count, position and shape; 2) inter-rater variability including discrepancies regarding lesion existence; 3) clinical objectives which require detection and segmentation of *all* lesions in order to estimate treatment efficacy (in clinical trials and in the clinic). We provide illustrative examples of challenges and requirements placed on the main industrial clinical trial analysis system used in the development of the majority of new MS treatments currently available worldwide, as well as the process and resulting lesion labels provided by their trained neuro-radiologists.
Joint work with Dr. Arnold, neurologist at the Montreal Neurological Institute and President of NeuroRx Research.
Tal Arbel is a professor in the Department of Electrical & Computer Engineering and Director of the Probabilistic Vision Group and Medical Imaging Lab in the Centre for Intelligent Machines at McGill University, Montreal, Canada. Her research focuses on the development of probabilistic and machine learning methods in computer vision for medical image analysis, for a wide range of applications in neurology and neurosurgery. She has particular extensive expertise in developing probabilistic graphical models for brain tumour/lesion detection and segmentation. Recent work is focused on modeling uncertainty in deep learning networks for medical image analysis and on developing machine learning methods for the automatic identification of biomarkers predictive of future neurodegenerative disease progression. She has co-organized a number of major international conferences, including serving as co-organizer and satellite events chair for MICCAI 2017, and area chair/program committee member for CVPR and MICCAI. She is currently an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and the Journal of Computer Vision and Image Understanding (CVIU).
Previously we had announced a talk by Dr. Elizabeth Krupinski, which unfortunately had to be cancelled due to scheduling problems.
2017 Event (Quebec City, Canada)
Danna Gurari – Mixing Crowds, Computers, and Experts for Scalable Annotation of Biomedical Images
Biomedical researchers are running image-based studies to systematically study fundamental biological processes. The larger goal of this effort is to contribute to discoveries and innovations that, for example, address society’s health care problems or lead to new bio-inspired technology. However, the key bottlenecks for extracting the desired information from images lie in unreliable annotation from algorithms and costly annotation by experts, especially at scale. Given the rise of crowdsourcing, I will discuss how we can utilize online crowds to better annotate biomedical images. I will present research on demarcating objects in images (segmentation), a critical and time-consuming precursor to many downstream applications. I will begin the talk with a detailed analysis of the relative strengths and weaknesses of three different image segmentation approaches: by experts, by crowd workers, and by algorithms. Then, I will describe a hybrid system design for intelligently distributing segmentation efforts between algorithms and crowds. Results show how to efficiently leverage crowd and algorithm efforts in order to optimize cost/quality trade-offs as well as how to produce segmentations comparable to those created by experts.
Danna Gurari is currently an Assistant Professor at the University of Texas at Austin School of Information. She completed a postdoctoral fellowship in the University of Texas at Austin Computer Science department under the supervision of Dr. Kristen Grauman, and her PhD at Boston University in the Image and Video Computing group under the supervision of Dr. Margrit Betke. Her research interests span computer vision, human computation/crowdsourcing, medical/biomedical image analysis, and applied machine learning. From 2007 to 2010, Danna worked at Boulder Imaging building custom, high-performance, multi-camera recording and analysis systems for military, industrial, and academic applications. From 2005 to 2007, she worked at Raytheon developing software for satellite systems. Danna earned her BS in Biomedical Engineering and MS in Computer Science from Washington University in St. Louis in 2005, with her thesis on ultrasound imaging. Danna was awarded the 2017 Honorable Mention Award at CHI, the 2015 Researcher Excellence Award from the Boston University computer science department, the 2014 Best Paper Award for Innovative Idea at MICCAI IMIC, and the 2013 Best Paper Award at WACV.
Tanveer Syeda-Mahmood – Challenges of large-scale data annotations for building cognitive medical assistants
Dr. Tanveer Syeda-Mahmood is an IBM Fellow and Chief Scientist/overall lead for the Medical Sieve Radiology Grand Challenge project in IBM Research, Almaden. Medical Sieve is an exploratory research project with global participation from many IBM Research Labs around the world including Almaden Labs in San Jose, CA, Haifa Research Labs in Israel and Melbourne Research Lab in Australia. The goal of this project is to develop automated radiology and cardiology assistants of the future that help clinicians in their decision making.
Dr. Syeda-Mahmood graduated from the MIT AI Lab in 1993 with a Ph.D. in Computer Science. Prior to joining IBM Almaden Research Center in 1998, she worked as a Research Staff Member at Xerox Webster Research Center, Webster, NY, where she led the image indexing program and was one of the early originators of the field of content-based image and video retrieval. Currently, she is working on applications of content-based retrieval in healthcare and medical imaging. Over the past 30 years, her research interests have spanned a variety of areas relating to artificial intelligence, including computer vision, image and video databases, medical image analysis, bioinformatics, signal processing, document analysis, and distributed computing frameworks.
Emanuele Trucco – Navigating the perilous waters of validation: the case of retinal image analysis
The explosion of algorithms to process medical images should draw increasing attention to validation methodologies, i.e. how to declare that a software tool actually works. In medical image analysis, validation is complicated by several issues linked to the nature of the data, acquisition protocols, operators and devices, the availability of annotated data, the characteristics of the annotations (e.g. protocols, type of annotation), and others. Although frameworks for validation have been proposed (but not universally adopted), substantial questions remain open, including the overarching one: how much can measurements (taken in a general sense) be trusted for subsequent decision making, be it statistical analysis, diagnosis, etc.? This talk attempts to capture the main issues behind validation, based on the 10+ years of interdisciplinary experience of the VAMPIRE group on retinal biomarkers, including crowdsourcing. The talk also aims to raise awareness of, and interest in, this crucial field of medical and healthcare data processing.
Emanuele (Manuel) Trucco, MSc, PhD, FRSA, FIAPR, is the NRP Chair of Computational Vision in Computing, School of Science and Engineering, at the University of Dundee, and an Honorary Clinical Researcher of NHS Tayside. He has been active since 1984 in computer vision, and since 2002 in medical image analysis. He has published more than 250 refereed papers and 2 textbooks (one of which is an international standard with 2,793 citations, Google Scholar 25 Oct 2016). He is director of VAMPIRE (Vessel Assessment and Measurement Platform for Images of the Retina), an international research initiative led by the Universities of Dundee and Edinburgh (T MacGillivray, tech director). VAMPIRE develops software tools for efficient data and image analysis, especially of multi-modal retinal images. VAMPIRE has been used in biomarker studies on cardiovascular risk, stroke, dementia, cognitive performance, neurodegenerative diseases, genetics and more, and has grown strong collaborations with clinical departments around the world. International collaborators include UCLA, Harvard, Tufts, A*STAR Singapore, the Chinese Academy of Science (Ningbo), INRIA, Charité Berlin, the University of Padova and many more. Further, recent projects have focused on robotic hydrocolonoscopy and whole-body MR angiographic data. Current industrial collaborations include Toshiba Medical, OPTOS plc, and Epipole retinal cameras. Manuel is particularly interested in validation, the reliability of retinal measurements, and their consequences for statistical inference.
2016 Event (Athens, Greece)
Marco Loog – Dealing with Weakly and Partially Annotated Data
Obtaining sufficient annotated data remains one of the main obstacles to successfully deploying supervised learning in biomedical imaging, even in this era of big data. I give a brief, and probably fairly rapid, overview of various machine learning and pattern recognition techniques that aim to exploit data that is only weakly or partially labeled, so as to potentially reduce the need for annotations. I will cover approaches such as semi-supervised, multiple-instance, transfer, and active learning, and set out to identify some core challenges in these areas.
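As one concrete instance of the semi-supervised family mentioned above, here is a minimal self-training loop: a classifier trained on the labeled pool repeatedly pseudo-labels its most confident unlabeled point and adds it to the pool. This is a generic textbook sketch with a 1-nearest-neighbour base learner and distance as a confidence proxy, not a method attributed to the speaker:

```python
import math

def nearest_label(x, labeled):
    """1-NN prediction: label of the labeled point closest to x.
    labeled is a list of (point, label) pairs."""
    return min(labeled, key=lambda p: math.dist(x, p[0]))[1]

def self_training(labeled, unlabeled, rounds=100):
    """Self-training sketch: each round, pseudo-label the unlabeled point
    closest to the current labeled pool (a crude confidence proxy) and
    promote it to the pool. Returns the enlarged (point, label) list."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        x = min(unlabeled,
                key=lambda u: min(math.dist(u, p[0]) for p in labeled))
        labeled.append((x, nearest_label(x, labeled)))
        unlabeled.remove(x)
    return labeled
```

The point of ordering by confidence is that early, easy pseudo-labels can propagate outward and help label harder points later; the well-known failure mode is that one early mistake propagates just as readily.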
Marco Loog received an M.Sc. degree in mathematics from Utrecht University and a Ph.D. degree from the Image Sciences Institute. He subsequently moved to Copenhagen, where he was an assistant and, eventually, associate professor, while also working at Nordic Bioscience. Currently, Marco is affiliated with the Pattern Recognition Laboratory at Delft University of Technology. He is also an honorary professor in pattern recognition at the University of Copenhagen. Marco’s principal research interest is in supervised pattern recognition in all sorts of shapes and sizes.
Pascal Fua – Reducing the Annotation Burden in Microscopy Imaging via Domain Adaptation
Electron and Light Microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While Machine Learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to use a classifier trained for a specific one on another. This means that this tedious annotation process has to be repeated for each new stack.
In this talk, we will present domain adaptation algorithms that address this issue by effectively leveraging labeled examples across different acquisitions. This drastically reduces the annotation requirements. Our approach can handle complex, non-linear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes.
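The speaker's own algorithms are not detailed in this abstract, but the flavor of transferring labeled examples across acquisitions can be illustrated with CORAL (correlation alignment), a simple and well-known domain-adaptation baseline that is not the method presented in the talk: transform source features so their second-order statistics match those of the target acquisition, then train on the aligned source labels.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """CORAL-style alignment: whiten the source features, then re-color
    them with the target covariance and shift to the target mean, so the
    first- and second-order statistics of both domains match.

    source, target: (n_samples, n_features) arrays from two acquisitions.
    """
    def mat_pow(C, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition,
        # clipping eigenvalues at eps for numerical stability.
        w, v = np.linalg.eigh(C)
        return (v * np.maximum(w, eps) ** p) @ v.T

    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    centered = source - source.mean(axis=0)
    return centered @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5) + target.mean(axis=0)
```

A classifier trained on `coral_align(source, target)` with the original source labels often transfers better to the target stack than one trained on the raw source features, precisely because the feature distributions no longer disagree at second order.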
Pascal Fua received an engineering degree from Ecole Polytechnique, Paris, in 1984 and a Ph.D. degree in Computer Science from the University of Orsay in 1989. He joined EPFL (Swiss Federal Institute of Technology) in 1996, where he is now a Professor in the School of Computer and Communication Science. Before that, he worked at SRI International and at INRIA Sophia-Antipolis as a Computer Scientist. His research interests include shape modeling and motion recovery from images, analysis of microscopy images, and Augmented Reality. He has (co)authored over 300 publications in refereed journals and conferences. He is an IEEE Fellow and has been an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence. He often serves as a program committee member, area chair, and program chair of major vision conferences, and has cofounded two spinoff companies.