Artificial Intelligence–Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review


Introduction

Background

Brain cancer is typically characterized by a brain tumor, that is, a mass or growth of abnormal cells in the brain. The signs and symptoms of a brain tumor vary widely and are determined by its size, location, and rate of growth. Brain tumors can originate in the brain (primary brain tumors) or spread to the brain from other body regions (secondary or metastatic brain tumors). In general, studying brain cancer is challenging given the highly complex anatomy of the human brain, where different regions are responsible for various nervous system functions [].

Medical imaging technologies for studying the brain are rapidly advancing. Therefore, it is critical to provide tools to extract information from brain image data such that they may aid in automatic or semiautomatic computer-aided diagnosis of brain cancer. Artificial intelligence (AI) techniques based on modern machine learning and deep learning models enable computers to make data-driven predictions using massive amounts of data. These techniques have a wide range of applications, many of which can be customized to extract useful information from medical images [-].

Among AI techniques developed for brain cancer applications, architectures based on convolutional neural networks (CNNs) have dominated the research on brain cancer diagnosis and classification. For example, UNet (an encoder-decoder CNN architecture) and its variants [,] are popular for brain tumor segmentation tasks. However, CNNs are known to be effective in extracting only local dependencies in the input image data, which is mainly attributed to the localized receptive field. Compared with CNNs, attention-based transformer models (transformers) [] are good at capturing long-range dependencies. Given their ability to learn long-range dependencies, transformers form the backbone of most state-of-the-art models in the natural language processing domain [].
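To make the distinction concrete, the following minimal sketch (ours, not taken from any of the cited works) implements single-head scaled dot-product self-attention in PyTorch; the attention matrix contains one weight for every pair of positions, which is what lets a transformer relate distant image patches in a single layer, at a cost that grows quadratically with the number of tokens.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (batch, n_tokens, dim) sequence of patch embeddings.
    Every token attends to every other token, so dependencies are
    captured regardless of spatial distance (at O(n_tokens^2) cost).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # (batch, n, n) pairwise interactions
    attn = F.softmax(scores, dim=-1)
    return attn @ v

# Toy usage: 64 tokens of dimension 32
dim = 32
x = torch.randn(2, 64, dim)
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([2, 64, 32])
```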

For image classification tasks, Dosovitskiy et al [] proposed the computer vision variant of the transformer architecture, typically known as the vision transformer (ViT). The concept of attention was applied to images by representing them as an ordered sequence of 16×16-pixel patches. The image patches were processed in a way similar to tokens (words) in natural language processing []. Each patch is flattened and multiplied by a learnable embedding matrix, learnable positional embeddings are added to preserve the patch order, and the resulting sequence is passed to the transformer encoder.
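As a rough illustration of this front end (a minimal sketch under standard ViT assumptions, not the reference implementation), the patch embedding and positional embedding steps can be written as follows; the strided convolution is a common shortcut for flattening each 16×16 patch and multiplying it by the embedding matrix.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches, project each patch to a
    learnable embedding, and add learnable positional embeddings,
    mirroring the ViT front end described above."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # A strided convolution flattens each patch and multiplies it by
        # the embedding matrix in a single operation.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))

    def forward(self, x):
        x = self.proj(x)                  # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)  # (B, n_patches, dim) ordered token sequence
        return x + self.pos_embed         # ready for the transformer encoder

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```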

Given the potential demonstrated by transformer-based approaches for computer vision tasks, transformers have quickly penetrated the field of medical imaging. For example, some studies [-] have used them on computed tomography scans and x-ray images of the lungs to classify COVID-19 and pneumonia. Similarly, Zhang and Zhang [] and Xie et al [] used ViT for medical image segmentation, and He et al [] used ViT for brain age estimation. With the recent developments of ViTs in computer vision applications, there has been a growing interest in developing ViT-based architectures for cancer imaging applications. ViT can also aid in the diagnosis and prognosis of other types of cancers. For example, Chen et al [] showed the scaling of ViTs to large whole-slide imaging for 33 different cancer types. The benchmarking results demonstrate that the transformer-based architecture with hierarchical pretraining outperforms the existing cancer subtyping and survival prediction methods, indicating its effectiveness in capturing the hierarchical phenotypic structure in tumor microenvironments.

Accordingly, many recent efforts have been reported on the developments of ViT architectures to make progress in brain cancer applications. With the growing interest in developing ViT-based methods for brain cancer imaging, there is a dire need to review the recent developments and identify the key challenges. To the best of our knowledge, no study (review) has reported the different ViT architectures for brain cancer imaging and analyzed how ViT complements CNNs in brain cancer diagnosis, classification, grading, and brain tumor segmentation.

A few review and survey articles that are relevant to our work are by Parvaiz et al [], Magadza and Viriri [], Akinyelu et al [], He et al [], and Biratu et al []. Among these, Magadza and Viriri [] and Biratu et al [] have surveyed the articles that used deep learning and machine learning methods for brain tumor segmentation. In addition, they covered papers until mid-2021 only and did not cover studies on ViT. Similarly, the survey by Akinyelu et al [] has a broad scope, as it covered different methods including CNNs, capsule networks, and ViT used for brain tumor segmentation. In addition, it included only 5 studies on ViT, of which 4 were from 2022. Reviews by Parvaiz et al [], He et al [], and Shamshad et al [] covered the applications of ViT in medical imaging; however, the scope of all these reviews is broad, as they included different medical imaging applications. In addition, they conducted a descriptive study of ViT for various medical imaging modalities. Similarly, many relevant recent studies on ViT-based architectures have been left out, as both the reviews [,] were released in early 2022. Nevertheless, the aforementioned reviews could be of interest to the readers. Table 1 compares our review with the previously published review articles.

Compared with other existing reviews on ViTs and medical imaging, our study is specific to brain cancer applications and covers the most recent developments. This review provides quantitative insights into the computational complexity and the required computational resources to implement ViT architectures for brain cancer imaging. Such insights will be helpful for the researchers to choose hardware resources and graphics processing units (GPUs). This review identifies the research challenges that are specific to ViT-based approaches in brain cancer imaging applications. These discussions will raise awareness for the related research directions. This review identifies the available public data sets and highlights the need for additional data to motivate the community to develop more publicly available data sets for brain cancer research. Furthermore, this review follows a narrative synthesis approach that would help the readers follow the text quickly.

Table 1. Comparison with similar review articles. For each review, the title, month and year, scope and coverage, and comparison with our review are listed.

Vision transformers in Medical Computer Vision—A Contemplative Retrospection [] (March 2022)
The title is specific to ViTa; however, the full text has a very broad scope with discussions on deep learning, CNNsb, and ViT.
It covers different applications in medical computer vision, including the classification of disease, segmentation of tissues, registration tasks in medical images, and image-to-text applications.
It does not provide much text on brain cancer applications of ViT.
Many recent studies of 2022 are left out as the preprint was released in March 2022.
It does not provide a comparative study on the computational complexity of ViT-based models.
Our review is also specific to ViT.
Our review is specific to brain cancer applications.
Our review includes more recent studies on ViT.
Our review provides a comparative study of the computational complexity of the ViT-based models.
Transformers in medical imaging: A survey [] (January 2022)
It is specific to ViT.
It has a broad scope as different medical imaging applications are included.
It does not include many recent studies on ViT for brain cancer imaging (as the preprint was released in January 2022).
Our review is also specific to ViT.
Our review is specific to brain cancer applications.
Our review includes more recent studies on ViT.
Transformers in Medical Image Analysis: A Review [] (August 2022)
It is specific to ViT.
It has broad scope as different medical imaging applications are included.
It provides a descriptive review of ViT techniques for different medical imaging modalities.
It does not provide a quantitative analysis of the computational complexity of ViT-based methods.
Our review is also specific to ViT.
Our review is specific to brain cancer applications.
Our review provides a comparative study of the computational complexity of the ViT-based models.
Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRIc: A Survey [] (July 2022)
It covers applications specific to brain tumor segmentation.
It has a broad scope, as it includes studies on CNNs, capsule networks, and ViT.
It includes only 5 studies on ViT.
Many recent studies are left out as it covers only 4 studies from 2022.
It provides no quantitative analysis of computational complexity.
Our review is also specific to brain cancer and brain tumor.
Our review covers more recent studies.
Our review includes 22 studies on ViT for brain cancer application.
Our review provides a comparative study of the computational complexity of the ViT-based models.
A survey of brain tumor segmentation and classification algorithms [] (September 2021)
It has a very broad scope as it covers traditional machine learning and deep learning methods.
It covers studies until early 2021 only.
Our review is specific to ViT.
Our review covers more recent studies.
Deep learning for brain tumor segmentation: a survey of state-of-the-art [] (January 2021)
It has a broad scope as it covers different deep learning methods.
Many recent studies are left out.
Our review is specific to ViT.
Our review covers more recent studies.

aViT: vision transformer.

bCNN: convolutional neural network.

cMRI: magnetic resonance imaging.

Research Problem

The popularity of transformer-based approaches for medical imaging has been increasing. Many recent studies have developed new transformer-based methods for brain cancer applications. Hence, there is a need to review the recent studies on how transformer-based approaches have contributed to brain cancer diagnosis, grading, and tumor segmentation. In this study, we present a review of the advancements in ViTs for brain cancer imaging applications. We present the recent ViT architectures for brain cancer diagnosis and classification, identify the key pipelines for combining ViT with CNNs, and highlight the key challenges and issues in developing ViT-based AI techniques for brain cancer imaging. More specifically, this review aims to identify the common techniques that were developed to use ViT for brain tumor segmentation and whether ViTs were effective in enhancing the segmentation performance. This review also identifies the common modality of brain imaging data used for training ViT for brain tumor segmentation. Moreover, this review identifies the commonly used brain tumor data sets that contributed to the development of ViT-based models. Finally, this review presents the key challenges that the researchers faced in developing ViT-based approaches for brain tumor segmentation. We believe that this review will help researchers in deep learning and medical imaging interdisciplinary fields to understand the recent developments on the topic. Furthermore, it will appeal to students and researchers interested in learning about the advancements in brain cancer imaging.


Methods

Overview

We performed a literature search in major scientific databases and conducted a scoping review following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines []. provides the PRISMA-ScR checklist. The literature search and the study selection were performed using the steps described in the following subsections.

Search Strategy

Search Sources

We searched for relevant literature in 4 databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search was performed between July 31 and August 1, 2022. For Google Scholar, we retained the first 300 results, as the results beyond 300 lacked relevance to the topic of this review. We also screened the reference lists of the included studies to retrieve any additional studies that fulfilled the inclusion criteria.

Search Terms

We defined the key terms for the search by referring to the available literature and by a discussion with domain experts. The search terms comprised the terms corresponding to the intervention (ie, transformers) and the target application (ie, cancer and tumor). The search strings are provided in .

Search Eligibility Criteria

Our search focused on studies that reported developing ViT-based architectures for brain tumor segmentation, brain cancer diagnosis, or prognosis. We considered studies conducted between January 2017 and July 2022. We included studies that used ViT with or without combining other deep learning architectures, such as CNN, and excluded studies that used only CNN. We excluded studies that reported the diagnosis of other cancer types, such as lung cancer or colorectal cancer, and did not report the use of the model for any form of brain cancer. We included studies that used any type of brain cancer data, including brain magnetic resonance imaging (MRI) and histopathology image data. We included studies published as peer-reviewed articles or conference proceedings and excluded non-peer-reviewed articles (preprints), short notes, editorial reviews, abstracts, and letters to the editor. We excluded survey and review articles. We did not impose any additional restrictions on the country of publication and the performance or accuracy of the ViT used in the studies. Finally, for practical reasons, we included studies published only in English.

Study Selection

Two reviewers, HA and RQ, independently screened the titles and abstracts of the studies retrieved in the search process. In the abstract screening, the reviewers excluded the studies that did not fulfill the inclusion criteria. The studies retained after the title and abstract screening were included for full-text reading. At this stage, disagreements between the 2 reviewers (HA and RQ) were analyzed and resolved through mutual discussion. Finally, the study selection was verified by a third reviewer.

Data Extraction

We designed a custom-built data extraction sheet. presents the different fields of information in the data extraction sheet. Initially, we pilot-tested the fields in the extraction sheet by extracting data from 7 relevant studies. Two reviewers (HA and RQ) extracted the data from the included studies. The critical information extracted was the application of ViT, the architectures of ViT, the complexity of the architectures used, the pipeline for combining ViT and CNNs, the data sets and their relevant features, and the open research questions identified in the studies. The 2 reviewers resolved disagreements through mutual discussions and revisiting the full text of the relevant study where needed.

Data Synthesis

We followed a narrative approach to synthesize the data after data extraction. We categorized the included studies based on applications, such as tumor segmentation, grading, or prognosis. We also organized the studies based on data type, such as public versus private data and 2D versus 3D data. We also identified the modality of the data used in the included studies, such as MRI or pathology images. Next, we identified the most frequently used architectures of ViT and the key pipelines for incorporating ViT in cascade or parallel connections with CNN models. We also classified the included studies based on the metrics used to evaluate the performances. Finally, if available, we identified the public code repositories for the model implementation as reported in the included studies.


Results

Search Results

A total of 736 studies were retrieved. Of these, we removed 224 duplicates. After the title, abstract, and metadata screening, we removed 488 studies that did not fulfill the inclusion criteria and retained 24 studies. In the full-text screening, we removed 2 studies. Overall, 22 studies were included in this review. We did not find any additional studies by forward and backward reference checking. shows the flowchart for the study selection process. shows a list of all the included studies.

Figure 1. The PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) flowchart for the selection of the included studies. ViT: vision transformer.

Demographics of the Included Studies

Among the 22 included studies, 9 (41%) were published in peer-reviewed journals, whereas 13 (59%) were published as conference or workshop proceedings. Of the 22 studies, 19 (86%) were published in 2022, whereas only 3 (14%) were published in 2021. No studies published before 2021 were found. Among the studies published in 2022, one-third (6/22, 27%) were published in July. The included studies were published by authors from 6 different countries (based on first-author affiliation). Among the 22 studies, almost half (n=10, 45%) were published by authors from China and 5 (23%) were published by authors from the United States. Authors from the United Kingdom and India published 3 and 2 studies, respectively, whereas both South Korea and Vietnam published 1 study each. shows a summary of the year-wise and month-wise studies. shows a summary of the country-wise demographics of the included studies. summarizes the demographics of the included studies. shows a visualization for the mapping of the included studies with year, month, and country of publication.

Table 2. Demographics of the included studies (N=22). Values are presented as studies, n (%).

Year and month
2022
January: 2 (9)
February: 2 (9)
March: 1 (4.5)
April: 5 (23)
May: 1 (4.5)
June: 2 (9)
July: 6 (27)
2021
August: 1 (4.5)
September: 1 (4.5)
November: 1 (4.5)

Countries
China: 10 (45)
United States: 5 (23)
United Kingdom: 3 (14)
India: 2 (9)
South Korea: 1 (4.5)
Vietnam: 1 (4.5)

Type of publication
Conference: 13 (59)
Journal: 9 (41)

Figure 2. Mapping of the included studies with year, month, and country. S1 through S22 are the included studies.

Main Tasks Addressed in the Studies

Among the included studies, 19 (86%) of the 22 studies addressed the task of segmentation [-], and 1 study [] reported survival prediction. One study [] reported the detection of lesions. One study [] performed grading of the tumor. In addition, 1 study [] performed the diagnosis of multiple sclerosis, and 1 study [] performed reconstruction of fast MRI. One study [] also performed isocitrate dehydrogenase (IDH) genotyping in addition to segmentation. shows a summary of the key characteristics and tasks addressed in the included studies.

Table 3. Summary of key characteristics of the included studies.

Reference | Year | 3D model | 2D model | Image modality | Purpose | Transformer name | Data source
[] | 2022 | Yes | Yes | MRIa | Segmentation | SWINb transformer | Public
[] | 2022 | Yes | No | MRI | Segmentation | SWIN transformer | Public
[] | 2022 | Yes | No | MRI | Segmentation | SWIN transformer | Public
[] | 2022 | Yes | No | MRI | Segmentation | Not available | Public
[] | 2022 | Yes | No | MRI | Segmentation | Segtransvae | Public
[] | 2021 | Yes | No | MRI | Segmentation | TransBTS | Public
[] | 2021 | Yes | Yes | MRI | Segmentation | SegTran | Public
[] | 2022 | Yes | No | MRI | Segmentation | SWIN transformer | Public
[] | 2022 | Yes | No | MRI | Segmentation | TransUNet | Public
[] | 2022 | Yes | No | MRI | Segmentation | Not available | Public
[] | 2022 | Yes | No | MRI | Segmentation | TransBTS | Public
[] | 2022 | Yes | No | MRI | Segmentation | UNETRc | Public
[] | 2022 | Yes | No | MRI | Segmentation | SWIN transformer | Public
[] | 2021 | Yes | No | MRI | Segmentation | Not available | Public
[] | 2022 | No | Yes | MRI | Segmentation | Not available | Public
[] | 2022 | No | Yes | MRI | Segmentation | Not available | Public+private
[] | 2022 | Yes | Yes | MRI | Segmentation and diagnosis | Autoregressive transformer | Public
[] | 2022 | Yes | No | MRI | Segmentation and grading | Not available | Public
[] | 2022 | No | Yes | MRI | Segmentation and reconstruction | SWIN transformer | Public
[] | 2022 | No | Yes | MRI | SPd | Not available | Public
[] | 2022 | No | Yes | MRI | Detection | Not available | Private
[] | 2022 | No | Yes | Pathology | Grading | Not available | Private

aMRI: magnetic resonance imaging.

bSWIN: Shifted Window.

cUNETR: UNet Transformer.

dSP: survival prediction.

Key Architectures of the ViT for Brain Tumor Segmentation

In the included studies, ViTs were combined with different variants of a CNN to improve the overall performance of brain tumor segmentation. The Shifted Window (SWIN) transformer [] has recently become a popular choice for image-based classification tasks. Therefore, the most recent studies [-,,,] reported using SWIN transformers in their models. Some of the studies [,,,,,] incorporated the transformer module within the encoder or decoder or both modules of the UNet-like architectures. Some studies [-,,,] used the transformer module as a bottleneck between the encoder and decoder modules of UNet-like architectures. One study [] explored both cascade and parallel combinations of the transformer module with CNNs. One study [] used the transformer module in parallel combination with a residual network (a CNN). One study [] implemented the training of transformers using federated learning over distributed data from 22 institutions.

Complexity of the Models Used in the Studies

The included studies presented transformer-based models with different computational complexities. Of these, Fidon et al [] used the TransUNet model, which has 116.7 million parameters, whereas the UNETR model proposed by Hatamizadeh et al [] has 92.58 million parameters. The SegTran model proposed by Li et al [] has 93.1 million parameters. Compared with the UNETR [], the recent variant, that is, SWIN UNETR [], has 61.98 million parameters. The Segtransvae [] has 44.7 million parameters. The BTSWIN-UNet model [] has 35.6 million parameters, which is higher than that of other SWIN transformer–based models but much smaller than that of the UNETR. For example, on the same data, the SWIN transformer–based models Trans-BTS and SWIN-UNet have 30.6 million and 27.1 million parameters, respectively, whereas UNETR has 102.8 million parameters. The TransConver proposed by Liang et al [] has 9 million parameters. The SWINMR [] has 11.40 million parameters for reconstruction. Other studies [,,,,,-,-] did not provide details regarding the computational complexity of the models. Some studies have reported a different number of parameters for the same models used on their data. We believe that these minor differences occur because of the resolution of the input images, which may not be the same in different studies.
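For readers who want to reproduce such comparisons, the parameter count of any PyTorch model can be computed directly from its weights; the sketch below uses a small stand-in network purely for illustration.

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Return the number of trainable parameters of a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy stand-in network; substitute any segmentation model (eg, a ViT or UNet variant).
toy = nn.Sequential(nn.Conv3d(4, 32, 3, padding=1), nn.ReLU(), nn.Conv3d(32, 4, 1))
n_params = count_parameters(toy)
print(f"{n_params} parameters ({n_params / 1e6:.2f} million)")
```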

Hardware Use

Wang et al [] used 8 NVIDIA Titan RTX GPUs for training their model. Similarly, Hatamizadeh et al [] and Hatamizadeh et al [] trained their models on a DGX-1 cluster with 8 NVIDIA V100 GPUs. Jia and Shu [] used 4 NVIDIA RTX 8000 GPUs for training the model, whereas Zhou et al [] used 4 GeForce RTX 2080 Ti GPUs. Liang et al [] and Liang et al [] trained their models on 2 parallel NVIDIA GeForce 2080Ti GPUs. Similarly, Huang et al [] trained the model on 2 NVIDIA RTX 3090 GPUs with 24 GB GPU memory, and Cheng et al [] used 2 NVIDIA V100 GPUs. Zhang et al [] and Li et al [] trained their models on a single NVIDIA Tesla V100 GPU, Li et al [] trained the model on a single 24 GB Titan RTX GPU, Luu and Park [] used a single NVIDIA RTX 3090 GPU for training the model, Liu et al [] trained the model using NVIDIA GTX 3080, and Dhamija et al [] used Tesla P-100 GPU.

Types of Data Used in the Studies

All the included studies (except 1 []) used MRI data for brain tumor segmentation. Zhou et al [] used histopathology images. In 16 studies, volumetric MRI data were used, whereas in 9 studies, the models were developed for 2D image data. Three studies [,,] reported experiments on both volumetric data and image data. shows the Venn diagram for the number of studies using 3D versus 2D data.

Figure 3. Venn diagrams showing the number of studies that used 3D versus 2D data.

Data Sets Used in the Studies

Three studies [,,] reported using privately developed data sets or did not provide public access to the data. One study [] used both publicly available and privately developed data. The Brain Tumor Segmentation (BraTS) challenge data set of brain MRI has been the most popular data used in 17 (77%) of the 22 studies. More specifically, 6 studies used BraTS 2021 data [,,-], 5 used BraTS 2020 data [,,,,], 7 used BraTS 2019 data [-,,,,], 3 used BraTS 2018 data [,,], and 1 used BraTS 2017 data []. Some of these studies also used >1 data set, either independently or by combining them. Other data used in the included studies were MRI data from the Medical Decathlon used by Hatamizadeh et al [], the Cancer Imaging Archive data used by Dhamija et al [], the UK Biobank data used by Pinaya et al [], data from the University Hospital of Ljubljana used by Pinaya et al [], the Calgary-Campinas Magnetic Resonance reconstruction data used by Huang et al [], data from the University Hospital of Patras Greece used by Zhou et al [], and data from the Cancer Hospital and Shenzhen Hospital used by Li et al []. One study [] did not specify the data. summarizes the data sets used in the included studies and provides the public access links for each data set. shows the Venn diagram for the number of studies using public versus private data.

Table 4. Data sets used in the included studies.

Data set name | Modality | Available | URL | Used by the following studies
BraTSa 2021 | MRIb | Public | [] | [,,-]
BraTS 2020 | MRI | Public | [] | [,,,,]
BraTS 2019 | MRI | Public | [] | [-,,,,]
BraTS 2018 | MRI | Public | [] | [,,]
BraTS 2017 | MRI | Public | [] | []
Decathlon | MRI | Public | [] | []
TCIAc | MRI | Public | [] | []
UK Biobank | MRI | Public | [] | []
University Hospital of Ljubljana | MRI | Public | [] | []
Calgary-Campinas MRd reconstruction data set | MRI | Public | [] | []
University Hospital of Patras Greece | Pathology images | Private | —e | []
Cancer Hospital and Shenzhen Hospital data | — | Private | — | []
Not specified | N/Af | N/A | N/A | [,]

aBraTS: brain tumor segmentation.

bMRI: magnetic resonance imaging.

cTCIA: The Cancer Imaging Archive.

dMR: magnetic resonance.

eNot available.

fN/A: not applicable.

Figure 4. Venn diagrams showing the number of studies that used public versus private data sets.

Evaluation Metrics

The Dice score and the Hausdorff distance measurements are popular metrics commonly used to evaluate segmentation performance on the BraTS MRI data sets. Hence, in the included studies, the Dice score and Hausdorff distance were the most common metrics used to assess the results of brain tumor segmentation. In summary, 19 studies [-] reported the use of the Dice score, whereas 15 studies [-,-,,] used both the Dice score and Hausdorff distance. Two studies [,] reported intersection over union. One study [] reported the focal score and Tversky score for the federated learning framework evaluation in addition to the Dice score and Hausdorff distance for the segmentation evaluation. One study [] reported peak signal-to-noise ratio, structural similarity index, and Fréchet Inception Distance in the assessment of the reconstructed MRI in addition to intersection over union and Dice scores for segmentation evaluation. One study [] used the concordance index and hazard ratio to evaluate the performance of survival analysis. One study [] reported sensitivity and precision, and 1 study [] reported precision and recall.
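As an illustration of the two dominant metrics (a minimal sketch for binary masks, not the official BraTS evaluation code, which reports the 95th percentile Hausdorff distance), the Dice score and a symmetric Hausdorff distance can be computed as follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground voxel sets (in voxel units)."""
    p, t = np.argwhere(pred), np.argwhere(target)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy 2D example with two overlapping square masks
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:30, 10:30] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[12:32, 12:32] = 1
print(dice_score(pred, gt), hausdorff_distance(pred, gt))
```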


Discussion

Principal Findings

In this study, we reviewed the studies that used ViT to aid in brain cancer imaging applications. We found that most studies (19/22, 86%) were published in 2022, and almost one-third of these studies (6/19, 32%) were published in July 2022. As ViT was first proposed in 2020 for natural images, it has only recently been explored in brain MRI and cancer imaging. Almost half of the studies (10/22, 45%) were published by authors from China. Furthermore, the authors from China published twice the number of studies published by authors from the United States. Other countries published approximately one-third of the studies (7/22, 32%).

Motivation of Using Transformers for Segmentation

The transformer module works on the self-attention concept, that is, calculating pairwise interactions between all input units. Thus, transformers are good at learning contextualized features. Although this learning of contextualization by a transformer can be related to the upsampling path in a UNet encoder-decoder architecture, the transformer overcomes the limitation of the receptive field and hence captures long-range correlations better []. In a UNet architecture, one may enlarge the receptive fields by adding more downsampling layers or by introducing larger stride sizes in the convolution operations of the downsampling path. However, the former increases the number of parameters and may lead to overfitting, whereas the latter sacrifices the spatial precision of the feature maps []. Nevertheless, the initial attempts to introduce transformers for brain tumor segmentation used the transformer block in the encoder or decoder or the bottleneck stage of the UNet-like architectures. These approaches were mainly driven by the success of UNet-based architectures for segmentation; until 2020, CNN-based models were the best performers for brain tumor segmentation, and nnUNet [] was the winning entry for the BraTS2020 challenge. With improved strategies and architectures, attention-based models performed competitively in recent years. Wang et al [] presented the TransBTS model, which was the first attempt to incorporate transformers into a 3D CNN for brain tumor segmentation. Hatamizadeh et al [] later reported SWIN UNETR for brain tumor segmentation, which was the first transformer-based model that performed competitively for the BraTS 2021 segmentation task. The TransBTS model was trained and tested on the BraTS2018 and BraTS2019 data sets, whereas the SWIN UNETR has been evaluated on the BraTS 2021 data set. However, for the BraTS 2021 data set, the winning entry was an extension of the nnUNet model [] presented by Luu and Park [], who proposed introducing attention in the decoder of the nnUNet to perform the tumor segmentation. As identified by Jia and Shu [], the UNETR removed convolutional blocks in the encoder, which may result in insufficient extraction of local context information when applied to volumetric MRI data. Overall, these approaches of combining transformers and CNNs are driven by the motivation to use the best of both worlds, and the included studies suggest that this combination can be effective in improving brain tumor segmentation. In theory, there are many possible ways to combine the advantages offered by the 2 different architectures.

Applications Covered in the Studies

Most of the studies included are those that either designed an attention-based architecture or used existing ViT architectures to achieve the task of tumor segmentation. In the brain segmentation tasks, the key focus is the segmentation of gliomas, which is the most common brain tumor. As most of these studies used 1 of the variants of the BraTS data set where the MRI data are annotated for 4 regions, these studies reported segmentation of the whole tumor, tumor core, enhancing tumor, and background. Some studies also reported using attention-based models for other applications related to brain cancer, such as survival prediction, MRI reconstruction, grading of brain cancer, and IDH genotyping.

Discussion Related to the Architectures

Among the studies that used the ViT module after a 3D CNN features extraction, the TransBTS [] was the first architecture (released in September 2021) and served as inspiration for many other architectures. The TransBTS architecture was motivated by the idea of incorporating global context into the volumetric spatial features of brain tumors. Furthermore, the work highlighted the need to use an attention module on image patches instead of flattened images, unlike previous efforts. Essentially, the flattening of high-resolution images makes the implementation impractical, as transformers have a quadratic computational complexity with respect to the number of tokens (ie, the dimension of the flattened image). The TransBTS architecture has downsampling and upsampling layers linked through skip connections; however, in the bottom part of the architecture, there are transformer layers that help capture the global context. These transformer layers are used together with a linear projection layer and a patch embedding layer that transfer the image to a sequence representation. So, in a way, the ViT serves as the bottleneck layer to capture long-range dependencies. Later, Jia and Shu [] presented a modification of the TransBTS architecture [], using 2 ViT blocks after the encoder part instead of the single transformer block in the TransBTS. Specifically, the outputs of the fourth and fifth downsampling layers pass through a feature embedding (feature representation) layer, transformer layers, and a feature mapping layer and then pass through the corresponding upsampling 3D CNN layers. Compared with the TransBTS architecture, where the transformer was used at the end of the encoder and the feature representation was obtained after the fourth layer, Jia and Shu [] increased the depth to 5 layers and used the transformer in both the fourth and fifth layers. Therefore, after the fourth layer, the transformer effectively builds a skip connection with the corresponding layer of the decoder block.
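The following schematic sketch (ours, heavily simplified: skip connections, positional encodings, and normalization are omitted) illustrates this bottleneck pattern, in which a 3D CNN encoder downsamples the volume, a transformer relates all remaining spatial positions to each other, and a 3D CNN decoder upsamples back to a segmentation map; it is not the actual TransBTS code.

```python
import torch
import torch.nn as nn

class BottleneckTransformerUNet3D(nn.Module):
    """Schematic encoder-(transformer bottleneck)-decoder for volumetric input."""

    def __init__(self, in_ch=4, n_classes=4, feat=32, depth=2, heads=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=feat * 2, nhead=heads, batch_first=True)
        self.bottleneck = nn.TransformerEncoder(layer, num_layers=depth)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat * 2, feat, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(feat, n_classes, 2, stride=2),
        )

    def forward(self, x):
        f = self.encoder(x)                    # (B, C, D/4, H/4, W/4)
        b, c, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # each spatial position becomes one token
        tokens = self.bottleneck(tokens)       # global context via self-attention
        f = tokens.transpose(1, 2).reshape(b, c, d, h, w)
        return self.decoder(f)                 # (B, n_classes, D, H, W)

# Toy usage with a small 4-modality MRI-like volume
out = BottleneckTransformerUNet3D()(torch.randn(1, 4, 32, 32, 32))
print(out.shape)  # torch.Size([1, 4, 32, 32, 32])
```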

Similarly, Zhang et al [] used a multihead self-attention–based transcoder module embedded after the encoder of a 3D UNet. However, they replaced the residual blocks of the 3D UNet with a self-attention layer that operated on a 3D feature map, followed by progressive upsampling via a 3D CNN decoder module. Pham et al [] also used transformer layers after a 3D CNN module and used a variational encoder to reconstruct the volumetric images. Li et al [] presented the SegTran architecture, which is again based on using the transformer modules after the features extraction with CNN, thus capturing the global context. Here, the authors suggested combining the CNN features with positional encodings of the pixel coordinates and flattening them into a sequence of local feature vectors.

Fidon et al [] used the TransUNet architecture [] as the backbone of their model and used the test time augmentation strategy to improve inference. Finally, Cheng et al [] presented the MTTUNet architecture, which is a UNet-like encoder-decoder architecture for multitasking. They used the CNN layers to extract spatial features, which were then processed by the bottleneck transformer block. Subsequently, the decoder network performed the segmentation task. In addition, the authors also used the transformer output to perform IDH genotyping, thus making it a multitask architecture.

Hatamizadeh et al [] presented the UNETR architecture that redefined the task of 3D segmentation as a 1D sequence-to-sequence classification that can be used with a transformer to learn contextual information. Therefore, the transformer block in the UNETR operates on the embedded representation of the 3D MRI input data. In effect, the transformer is incorporated within the encoder part of a UNet architecture. Compared with other architectures such as BTSWIN-UNet [], TransBTS [], SegTran [,], and BiTr-UNet [], which use the transformer as a bottleneck layer of the encoder-decoder architectures, the UNETR directly connects the encoded representation from the encoder with the decoder part. Compared with other methods where the encoder part uses 3D CNN blocks, such as TransBTS [] and BiTr-UNet [], the UNETR does not use a convolutional block in the encoder. Instead, the UNETR obtains a 2D representation for the 3D volumes and then uses the 2D ViT architecture that works on the 2D patches of the images. Each patch is treated as 1 token for the attention operation. UNETR does not rely on a backbone CNN for generating the input sequences and directly uses the tokenized patches.

Luu and Park [] introduced an attention mechanism in the decoder of the nnUNet [] to perform the tumor segmentation. They extended the nnUNet and modified it by using axial attention in the decoder of the 3D UNet. Furthermore, they doubled the number of filters in the encoder while retaining the same number in the decoder. Sagar [] presented the Vision Transformer for Biomedical Image Segmentation architecture, which used transformer blocks in the encoder and decoder of a UNet architecture. The architecture introduced multiscale convolutions for feature extraction that were used as input to the transformer block.

Dhamija et al [] explored the sequential and parallel stacks of transformer-based blocks using a UNet block. In principle, they used a transformer-based encoder and a CNN-based decoder connected in parallel with a UNet-based encoder and then in cascade with a UNet-based encoder. The parallel combination (USegTransformer-P) reportedly outperformed the cascade combination by some margin. Zhou et al [] designed a parallel dual-branch network of a CNN (the ResNet architecture) and ViT and used it to grade brain cancer from pathology images. The dual-branch network established a duplex communication between the ResNet and ViT blocks that sends global information from the ViT to ResNet and local information from ResNet to the ViT.
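A simplified sketch of such a parallel (dual-branch) fusion block is shown below; it is our illustration of the general pattern rather than the USegTransformer-P or dual-branch network of the cited studies, and it omits the cross-branch information exchange and skip connections described above.

```python
import torch
import torch.nn as nn

class ParallelCNNTransformerBlock(nn.Module):
    """Schematic parallel fusion: a convolutional branch keeps local detail,
    a self-attention branch adds global context, and the two feature maps
    are merged by a 1x1 convolution."""

    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.global_branch = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        local = self.local_branch(x)                          # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        global_feat = self.global_branch(tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_feat], dim=1))  # merged local + global features

out = ParallelCNNTransformerBlock()(torch.randn(1, 64, 16, 16))
print(out.shape)  # torch.Size([1, 64, 16, 16])
```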

Many similar architectures were probably released concurrently by different research groups or released very close in time to each other. For example, Li et al [] found that segmentation transformer [] and TransUNet [] were released concurrently with their own model. Therefore, it is not surprising that there are a few similarities between the approaches adopted by these studies.

Discussion Related to SWIN Transformers

In general, transformers are notorious for their computational complexity, which is of the order O(n²) in the number of tokens. For example, as identified by Jia and Shu [], UNETR stacks transformer layers and keeps the sequence data dimension unchanged during the entire process, which results in expensive computation for high-resolution 3D images. SWIN transformers helped overcome this computational complexity and hence became a popular backbone architecture in many recent studies [-,,,]. For example, Liang et al [] reported the use of a 2D SWIN transformer [] and a 3D SWIN transformer [] to replace the traditional architecture of ViT to overcome the computational complexity. Jiang et al [] used a SWIN transformer as the encoder and decoder rather than as the attention layer. Furthermore, they extended the 2D SWIN transformer to a 3D variant that provided a base module. Similarly, Liang et al [] used a 3D SWIN transformer block in the encoder and decoder of a 3D UNet-like architecture. The architecture was inspired by the SWIN transformer and the SWIN-UNet model; however, they replaced the patchify stem with a convolutional stem to stabilize the model training. Furthermore, they used overlapping patch embedding and downsampling, which helped to enhance the locality of the segmentation network.
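The core trick can be illustrated with a window-partition sketch (ours; the shifted windows and relative position bias of the actual SWIN transformer are omitted): self-attention is computed only within small non-overlapping windows, so the cost grows with the window size rather than with the full token count.

```python
import torch
import torch.nn as nn

def window_partition(x: torch.Tensor, window: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping (window x window)
    blocks so that self-attention is computed inside each block only,
    reducing the cost from O((H*W)^2) to O(H*W * window^2)."""
    b, h, w, c = x.shape
    x = x.view(b, h // window, window, w // window, window, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window * window, c)

# Windowed self-attention on a toy 32x32 feature map with 8x8 windows
feat = torch.randn(1, 32, 32, 96)           # (B, H, W, C)
windows = window_partition(feat, window=8)  # (16 windows, 64 tokens, 96)
attn = nn.MultiheadAttention(embed_dim=96, num_heads=4, batch_first=True)
out, _ = attn(windows, windows, windows)    # attention within each window only
print(windows.shape, out.shape)             # torch.Size([16, 64, 96]) for both
```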

Hatamizadeh et al [] extended the UNETR architecture to the SWIN-UNet transformer (SWIN UNETR), which incorporated a SWIN transformer in the encoder part of the 3D UNet. The decoder part still used a CNN architecture to upsample the features to the segmentation masks. As reported previously, the SWIN UNETR was the first transformer architecture that performed competitively on the BraTS 2021 segmentation challenge. Liu et al [] presented a transition net architecture that combined a 2D SWIN transformer with a 3D transition decoder. The transition block transforms the 3D volumetric data into a 2D representation, which is then provided as an input to the SWIN transformer. Subsequently, in the decoder part, the transition block transforms the multiscale feature maps into a 3D representation to obtain the segmentation results. Huang et al [] used a cascade of residual SWIN transformers to build a feature extraction module, followed by a 2D CNN network. This architecture was designed for MRI reconstruction.

Discussion Related to Model Complexity

In general, transformer architectures have a high computational complexity. The numbers of parameters for models such as UNETR and TransUNet are as large as 92 million and 116 million, respectively. SWIN transformer–based architectures have a relatively smaller number of parameters (of the order of 30-45 million). For models with a higher number of parameters, the researchers had to rely on high-end GPU resources. Therefore, the computational setup reported in some of the included studies was built with as many as 8 GPUs. However, a few studies also reported training the models on a single GPU with memory sizes ranging from 12 GB to 24 GB.

Discussion Related to 3D Data

Our categorization of a model designed for 3D or 2D data was either based on direct extraction of the information from the studies or the description of the model architecture in the included studies. Therefore, if a study did not specify whether it used the volumetric data directly or transformed the data into 2D images but provided a 2D model architecture, we placed the study in the 2D data category. Many modern deep learning methods for medical imaging, including transformers, rely on pretrained models as their backbones. These backbones can generalize well, making them good candidates for use in other related tasks, as they provide generalization, better convergence, and improved segmentation performance []. However, Liu et al [] argued that such backbone architectures are, in general, difficult to migrate to 3D brain tumor segmentation. First, there is a general lack of 3D data, and most publicly available data sets provide 2D data. Second, medical images such as MRI vary in their distribution and style compared with natural images. These variations hinder the direct transformation of the 2D pretrained models for 3D volumetric data. Hence, they recommended transforming the 3D data into a 2D representation to enable its use with 2D transformers. However, numerous other studies have developed and used 3D models directly on volumetric data.

The most commonly used data in the included studies were the brain MRI of the BraTS data set. The BraTS data set has been phenomenal in facilitating the research on brain glioma segmentation. The BraTS challenge has served as a dedicated venue for the last 11 years and has established itself as a foundation data set in helping the community push the state-of-the-art in brain tumor segmentation. The BraTS data set has 4 MRI modalities, namely, T1-weighted, postcontrast T1-weighted, T2-weighted, and T2 fluid-attenuated inversion recovery. Furthermore, the data set provides baseline segmentation annotation from physicians.
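For orientation, the sketch below shows how the 4 modalities of one case are typically stacked into a multichannel input; the file suffixes are assumptions based on the usual BraTS naming convention and may differ across data set releases.

```python
import numpy as np
import nibabel as nib  # common reader for the NIfTI files distributed by BraTS

def load_brats_case(case_dir, case_id):
    """Stack the 4 MRI modalities of one BraTS case into a (4, H, W, D) array
    and return it with the segmentation labels. The file suffixes below are
    assumed from the typical BraTS naming convention; adjust for your release."""
    modalities = ["t1", "t1ce", "t2", "flair"]
    volumes = [
        nib.load(f"{case_dir}/{case_id}_{m}.nii.gz").get_fdata() for m in modalities
    ]
    seg = nib.load(f"{case_dir}/{case_id}_seg.nii.gz").get_fdata()
    return np.stack(volumes, axis=0), seg

# Example usage (hypothetical paths):
# image, label = load_brats_case("BraTS2021_00000", "BraTS2021_00000")
# print(image.shape, np.unique(label))
```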

Discussion Related to Evaluation Metrics

The Dice score and Hausdorff distance measurements have been more commonly reported, as these metrics are widely used to evaluate segmentation performance on the BraTS MRI data sets. In the included studies, the Dice score and Hausdorff distance were the most common metrics used to assess the results of brain tumor segmentation.

Strengths and Limitations

Strengths

Although there has been a surge in studies on the use of ViTs in medical imaging, only a few reviews have been reported on ViTs in medical imaging [,,]; however, their scopes are too broad. In comparison, to the best of our knowledge, this is the first review of the applications and potential of ViTs to enhance the performance of brain tumor segmentation. This review covers all the studies that used ViTs for brain cancer imaging; thus, this is the most comprehensive review. This review is helpful for the community interested in knowing the different architectures of ViTs that can help in brain tumor segmentation. Unlike other reviews [,,] that cover many different medical imaging applications, this review focuses on studies that have only developed ViTs for brain tumor segmentation. In this review, we followed the PRISMA-ScR guidelines []. We retrieved articles from the popular web-based libraries of medical science and computing to include as many relevant studies as possible. We avoided bias in study selection through an independent selection of studies by 2 reviewers and through validation of the selected studies and data extraction by a third reviewer. This review provides a comprehensive discussion on the different pipelines to combine ViTs with CNNs. Hence, this review will be very useful for the community to learn about the different pipelines and their working for brain tumor segmentation. In addition, we identify the computational complexity of the various pipelines to help the readers understand the associated computational cost of ViTs for brain tumor segmentation. We provide a comprehensive list of available data sets for brain MRI and hope that it will provide a good reference point for researchers to identify suitable data sets.
