Synthesis-based imaging-differentiation representation learning for multi-sequence 3D/4D MRI

Multi-sequence MRIs show different characteristics of protons within tissues, resulting in varying image appearances of water and fat when particular settings of radiofrequency pulses and gradients are used. Clinicians generally rely on multi-sequence MRI to reach a reliable diagnosis because, depending on the clinical task, a single sequence is often insufficient to characterize lesions. For instance, the standard clinical scan protocol for glioblastoma includes T1-weighted (T1), contrast-enhanced T1-weighted (T1Gd), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (Flair) sequences (Shukla et al., 2017). Dynamic contrast-enhanced (DCE) and diffusion-weighted imaging (DWI) sequences are required in the current standard protocol for breast MRI, which is used, for example, to predict the response to neoadjuvant chemotherapy (NAC) (Chen and Su, 2013). Because of their unique cellular microenvironment, lesions may be recognized and classified based on their distinct appearance combinations across multi-sequence MRI. Diagnosing disease from these combinations therefore relies on summarizing multiple features, which requires extensive clinical experience. The diagnostic reliability of multi-sequence MRI may still be improved, as it is often unclear which information is most important for lesion characterization.

In recent years, convolutional neural networks (CNNs) have been widely used for processing multi-sequence MRI (Feng et al., 2020, Grøvik et al., 2020, Tang et al., 2020, Zhuang et al., 2022). Unlike single-sequence analysis, however, multi-sequence CNN-based research faces the following challenges: (1) the number of paired samples is small for training a large model; and (2) the redundant information shared between sequences drowns out the information with diagnostic value. Cropping out a region of interest (ROI) or providing a semantic segmentation mask of the target tissue can exclude redundant information (Feng et al., 2020). However, patch-based methods lack associations with global features, while crucial information may exist in the background; segmentation masks require additional manual annotation, which is time-consuming and labor-intensive.

In this study, we address the problem of information redundancy across multiple MRI sequences from a novel perspective, without any additional annotation. Since redundant information is the key to achieving sequence-to-sequence synthesis (Sharma and Hamarneh, 2019), we propose a sequence-to-sequence (Seq2Seq) generator to learn the redundant shared information across sequences. Based on this, we introduce the absolute error between the generated and real images as the imaging-differentiation map, which contains the non-synthesizable regions and thereby highlights the unique information of each sequence (see Fig. 1). Our contributions are threefold:

i. We propose a simple and efficient end-to-end model to transform an arbitrary given 3D/4D MRI sequence into a target sequence.

ii. Based on our proposed new metric, we rank the importance of each sequence's contribution to MRI synthesis, supporting non-inferior clinical scan selection.

iii. We quantitatively estimate the amount of incremental information each sequence contributes relative to the remaining sequences, and further employ it to guide multi-sequence learning in specific clinical tasks.
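The imaging-differentiation map described above reduces to a voxel-wise absolute error between a real sequence and its Seq2Seq-synthesized counterpart: regions the generator cannot reproduce receive large values and mark sequence-unique information. The following is a minimal NumPy sketch of that computation, not the authors' implementation; the function name and the toy volumes are illustrative assumptions.

```python
import numpy as np

def imaging_differentiation_map(real: np.ndarray, synthesized: np.ndarray) -> np.ndarray:
    """Voxel-wise absolute error |real - synthesized| over a 3D volume.

    High values indicate non-synthesizable regions, i.e. information
    unique to the target sequence. (Illustrative sketch.)
    """
    if real.shape != synthesized.shape:
        raise ValueError("real and synthesized volumes must have the same shape")
    return np.abs(real.astype(np.float32) - synthesized.astype(np.float32))

# Toy 3D volumes (depth, height, width); in practice `synth` would come
# from the trained Seq2Seq generator.
rng = np.random.default_rng(0)
real = rng.random((8, 16, 16), dtype=np.float32)
synth = real.copy()
synth[2, 4:8, 4:8] += 0.5  # pretend the generator fails in this region

diff = imaging_differentiation_map(real, synth)
print(float(diff.max()))       # error concentrated in the failed region
print(float(diff[0].max()))    # perfectly synthesized slices score 0
```

Averaging such maps over a dataset gives a per-sequence score of how much information the remaining sequences cannot reconstruct, which is the quantity used to rank sequence importance.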
