16th International Conference on Image Analysis and Recognition (ICIAR)
Image Synthesis and its Growing Role in Medical Imaging
William B. Kouwenhoven Professor
Electrical and Computer Engineering
Johns Hopkins University
Image synthesis methods take acquired images and produce images with contrasts, modalities, resolutions, or noise levels that were not imaged. These methods are proving invaluable in medical image analysis and may someday find applications in the clinic. A common application is image imputation, where missing images are synthesized, usually for use in a standard image-processing pipeline. Other uses include noise reduction, resolution enhancement, and artifact removal. CT images can be synthesized from MR images for various applications, including attenuation correction in PET-MR scanners. Intensity normalization in MR, a significant problem because MR does not have a standardized intensity scale, may well be solved by image synthesis. In fact, image synthesis is proving to be a pragmatic solution to the problem of quantitative MRI and to the quest for reliable imaging biomarkers in precision medicine. Different image synthesis approaches will be described in this talk, starting with historically important methods and ending with the most modern approaches under development today. Different applications will be described to illustrate the great potential of image synthesis.
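The imputation idea can be sketched as supervised regression from a source contrast to a target contrast. This is a deliberately toy sketch with a synthetic linear mapping and small linear patch regression; real synthesis methods use far richer models (e.g., random forests or deep networks):

```python
import numpy as np

def extract_patches(img, r=1):
    """Collect the (2r+1)^2 patch around every interior pixel."""
    H, W = img.shape
    feats = []
    for i in range(r, H - r):
        for j in range(r, W - r):
            feats.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(feats)

# Training pair: source-contrast image and a co-registered target contrast.
rng = np.random.default_rng(0)
src = rng.random((32, 32))
tgt = 2.0 * src + 0.1            # hypothetical contrast mapping to be learned

X = extract_patches(src)                       # patch features, one row per pixel
y = tgt[1:-1, 1:-1].ravel()                    # target intensity at patch centers
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Synthesize the target contrast for a new source image.
new_src = rng.random((32, 32))
pred = np.c_[extract_patches(new_src), np.ones(30 * 30)] @ w
```

Because the toy mapping is exactly linear, the learned weights recover it and the synthesized image matches the true target contrast at interior pixels; real data would of course leave a residual.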
Focused Topic: Advanced Tools for Ultrasound Imaging
Exploiting data sparsity and machine learning in medical imaging
Donald Biggar Willet Professor in Engineering
Department of Bioengineering
University of Illinois at Urbana-Champaign
We frequently find in medical imaging that, when recorded data are decomposed into an appropriate basis, the information related to a specific clinical task lies in a compact subspace. Sampling such sparse data appropriately enables high frame-rate imaging with minimal loss of image quality. It also enables efficient implementation of machine learning and other analysis techniques designed to enhance the diagnostic performance of the modality.
These concepts are being applied toward the development of a new power-Doppler imaging method using data from commercial ultrasound instruments. Our method significantly increases blood-signal sensitivity and specificity for slow, spatially disorganized patterns of blood flow, as is characteristic of peripheral perfusion. We arrange the recorded echo data into high-dimensional arrays that are decomposed into basis sets to effectively separate strong tissue echo signals from the much weaker blood signals of perfusion. That is, we expand the dimensionality of the recorded data to fully capture the perfusion subspace before reducing dimensionality for clutter filtering and image rendering. This combination of pulse sampling and clutter filtering enhances peripheral perfusion images such that injectable contrast media are no longer required. In preclinical mouse studies, we find our methods significantly enhance the effectiveness of sonography at assessing the time course of revascularization in an ischemic hindlimb.
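The subspace separation step above can be sketched with a conventional SVD-based clutter filter. This is a simplified stand-in for the authors' higher-dimensional method; the array shapes, signal magnitudes, and tissue-rank cutoff are purely illustrative:

```python
import numpy as np

def svd_clutter_filter(iq, n_tissue=2):
    """Separate slow-flow blood signal from tissue clutter.

    iq: echo ensemble (complex in practice), shape (pixels, frames).
    Tissue echoes are strong and temporally coherent, so they concentrate
    in the leading singular vectors; zeroing those components leaves the
    weaker, less coherent blood signal.
    """
    U, s, Vh = np.linalg.svd(iq, full_matrices=False)
    s_blood = s.copy()
    s_blood[:n_tissue] = 0.0           # reject the tissue subspace
    return (U * s_blood) @ Vh          # reconstruct blood-only data

# Toy ensemble: a strong rank-1 "tissue" component plus weak random "blood".
rng = np.random.default_rng(0)
pixels, frames = 256, 64
tissue = 100.0 * np.outer(rng.standard_normal(pixels), np.ones(frames))
blood = rng.standard_normal((pixels, frames))
filtered = svd_clutter_filter(tissue + blood, n_tissue=1)
power = np.mean(np.abs(filtered) ** 2, axis=1)   # power-Doppler estimate
```

On this toy data, the 100x-stronger tissue component is almost entirely removed by discarding a single singular component, leaving a residual at the scale of the blood signal.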
Similar insights provide opportunities for elasticity image reconstruction based on combinations of finite-element modeling and cooperative neural networks within a machine-learning algorithm. These ideas are redefining sonography as a computational imaging modality.
Knowledge Discovery: Can We Do Better Than Deep Neural Networks?
Ryerson Multimedia Research Laboratory
Ryerson University, Toronto
Knowledge discovery plays a key role in the success of machine learning for visual analysis and recognition. It consists of two stages: key-point detection in the visual scene, which leads to effective feature (descriptor) generation, and feature coding, which transforms those features into a more effective (or optimal) representation.
The talk starts with an overview of the state of the art in knowledge discovery. We then present a recently conceived design for an effective framework consisting of the following components: 1) SCK, a universal key-point detector built upon the theory of sparse coding. SCK can handle any visual structure (blobs, corners, junctions, and more) and has been analytically proven to be invariant to changes in illumination and spatial transformations. Results show that SCK outperforms the hand-crafted detectors it was compared against, such as SIFT, the Harris corner detector, and SFOP, as well as more recent deep-learning-based detectors, setting the stage for extracting high-quality visual features. 2) A family of mathematically inspired feature coding methods that optimize information representation, leading to superior performance in pattern recognition. It has been demonstrated that this simple but mathematically rigorous approach beats deep learning architectures with hundreds or even thousands of layers. This analytically rigorous approach promises a simple, effective, and academically more relevant alternative to pure deep neural networks built through trial-and-error architecture design.
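As a rough illustration of the sparse-coding machinery a detector like SCK builds on, here is a minimal ISTA solver for the lasso sparse-coding problem. This is a generic sketch, not the SCK formulation itself; the dictionary, penalty weight, and iteration count are illustrative:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative
    shrinkage-thresholding (ISTA).

    D: dictionary (patch_dim, n_atoms); x: signal patch.
    The sparse code `a` indicates which few atoms (visual structures)
    explain the patch.
    """
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)         # gradient of the quadratic term
        z = a - grad / L                 # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x = 1.5 * D[:, 3] - 0.8 * D[:, 17]       # patch built from two known atoms
a = ista(D, x)
```

The recovered code is sparse, with its largest entries (up to the lasso's shrinkage) at the two atoms used to build the patch; in a detector, the pattern and strength of such codes would drive key-point selection.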