A Multiclass Radiomics Method–Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans

Artificial intelligence (AI)–based lung image analysis models can optimize the identification of patients who need specialized care, and standardized intensive care unit (ICU) admission criteria have been proven to safely reduce ICU overload. However, state-of-the-art AI systems face challenges in standardizing COVID-19 severity states.1–3

A systematic review by Born et al4 highlighted discrepancies between studies published by the clinical and AI communities on COVID-19 patient care. They found that most AI studies focused on diagnosis rather than tasks such as severity and prognosis assessment, which are more important in clinical practice. They also pointed out that AI models have a low adoption rate in clinical settings because of insufficient robustness and interpretability.4 Deep learning approaches for automated COVID-19 diagnosis from medical images, or for quantifying lung tissue involvement on computed tomography (CT) scans, have been proposed and have demonstrated potential.5–11 However, these approaches currently lack standardization in characterizing patient condition, making their implementation in health care systems difficult.12

Moreover, AI models that assess COVID-19 patients' severity using medical imaging and clinical data must meet clinical requirements.13–15 However, many existing approaches use a single-class lesion segmentation model that only classifies voxels as “healthy lung” or “lesion,” neglecting the various pathological patterns that occur during disease progression and thereby reducing the models' accuracy in characterizing patient severity.

To overcome these issues, we propose AssessNet-19, an automated CT-based radiomics multiclass lung lesion segmentation model to assess disease severity based on the standardized World Health Organization Clinical Progression Scale (WHO-CPS) for COVID-19 patients.16 We hypothesize that evaluating patient disease severity by considering various pathological lung imaging findings, such as ground-glass opacities (GGOs), consolidations (CONs), pleural effusion (PLE), and band-like structures (BANs), can improve accuracy and contribute to identifying radiological markers to characterize COVID-19 disease severity.

MATERIALS AND METHODS

Study Design

This study retrospectively collected CT imaging examinations and clinical data, acquired between March 2020 and November 2021, from COVID-19 patients with acute lung disease at 4 medical centers: Inselspital Bern, University of Bern, Switzerland (IBE); Lindenhofspital Bern, Switzerland (SLB); University Hospital of Parma, Italy (UPA); and Yale University–New Haven Hospital, United States (UYA). The clinical data were obtained during routine clinical workup and retrospectively collected and anonymized. Included subjects had to have a positive COVID-19 PCR test and a CT scan, with imaging and clinical data collected within 24 hours of each other for consistency.

The inclusion of these 4 sites was motivated by the goal of establishing a diverse data set that promotes clinical consistency and mitigates the biases associated with training on data from a single source. The selection of multiple sites resulted in a heterogeneous contribution, ensuring a broader representation of cases and enhancing the generalizability of the findings. The data set compiled for this study encompasses a comprehensive range of disease severities; it comprises CT scans obtained from 4 different manufacturers, using various reconstruction kernels, and includes scans conducted with and without intravenous contrast. It is important to note that the primary focus of this study was not to compare the performance of individual hospitals but rather to develop a robust model capable of running across different technical configurations in future applications.

Data

The study was approved by the Ethics Commission of the Canton of Bern (ID: 2020-02614, ID: 2020-00954), the Ethics Committee at Yale University–New Haven Hospital (ID: 2000027839), and the Ethics Committee at the University Hospital of Parma (ID: 1398/2020/OSS/AOUPR). All patients in the study gave consent for their data to be used for research. We retrospectively collected patients' medical imaging and clinical data, from which a subset of the available cases was selected using 3 criteria: patients had to have an acute COVID-19 infection, a CT scan taken within 15 days before and 60 days after a positive COVID-19 test, and clinical data available within ±12 hours of CT acquisition. This study confirmed the presence of the SARS-CoV-2 virus in all included patients by retrieving their positive PCR test results from the database at each hospital center. The PCR test procedures followed internal hospital protocols in accordance with established guidelines from health authorities, including the WHO and local health agencies, ensuring the reliability and accuracy of the results.
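As an illustration only, these 3 inclusion criteria reduce to a few time-window filters. The sketch below assumes a hypothetical pandas table with per-subject `pcr_result`, `pcr_date`, `ct_date`, and `clinical_date` columns; the hospitals' actual database schema is not published.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical records; column names and values are stand-ins.
df = pd.DataFrame({
    "subject":       ["A", "B"],
    "pcr_result":    ["positive", "positive"],
    "pcr_date":      pd.to_datetime(["2020-03-10", "2020-04-01"]),
    "ct_date":       pd.to_datetime(["2020-03-12", "2020-06-15"]),
    "clinical_date": pd.to_datetime(["2020-03-12 06:00", "2020-06-16 08:00"]),
})

# Criterion 1: acute COVID-19 infection confirmed by a positive PCR test.
positive = df["pcr_result"] == "positive"

# Criterion 2: CT taken within 15 days before to 60 days after the positive test.
ct_window = df["ct_date"].between(
    df["pcr_date"] - timedelta(days=15), df["pcr_date"] + timedelta(days=60)
)

# Criterion 3: clinical data available within +/-12 hours of CT acquisition.
clinical_window = (df["clinical_date"] - df["ct_date"]).abs() <= timedelta(hours=12)

included = df[positive & ct_window & clinical_window]
print(included["subject"].tolist())  # subject B falls outside both windows -> ['A']
```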

The data assembly, curation, and image ground truth labeling were completed in 3 steps, as shown in Figure S1 (Supplementary Material, https://links.lww.com/RLI/A833). Imaging characteristics of the development and evaluation cohorts are summarized in Table S1 (Supplementary Material, https://links.lww.com/RLI/A833). The U-Net (R231) model released by Hofmanninger et al17 was then used to automatically segment the left and right lungs from the CT scans, creating baseline lung segmentations. Finally, radiologists reviewed the automatically generated lung segmentations, made the necessary corrections, and manually segmented each lung lesion according to the segmentation protocol.
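For orientation, a baseline lung mask in the style of this first step can be obtained with the publicly released lungmask package by Hofmanninger et al. The sketch below assumes the `mask.apply` interface documented in earlier releases of that package (newer versions expose an `LMInferer` class instead) and a placeholder file path.

```python
import SimpleITK as sitk
from lungmask import mask  # pip install lungmask; earlier-release interface

# Placeholder path; any CT volume readable by SimpleITK will do.
ct = sitk.ReadImage("subject_ct.nii.gz")

# The package's default model is the U-Net (R231) referenced above; the output
# is a numpy array labeling the right and left lungs per voxel.
lung_labels = mask.apply(ct)
```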

Disease Severity Labeling–Ground Truth

All scores were based on the WHO-CPS.16 The WHO score was computed fully automatically from raw clinical data at the IBE and UYA centers using the WHO scoring algorithm, whereas the data for the UPA center were obtained manually from medical records. For subjects at centers IBE and UPA, the WHO score was calculated using clinical data within a 24-hour window centered on the CT examination; at UYA, clinical data were recorded daily and matched with the CT examination from the same day. Manual review at centers IBE and UYA confirmed the automated scoring. Patients who died within 12 hours of the CT examination were excluded from the study.

Disease severity was evaluated in 4 stages: ambulatory mild disease (symptomatic but not requiring hospitalization), hospitalized moderate disease (hospitalization with minimal treatment), hospitalized severe disease (hospitalization with noninvasive ventilation), and intubated critical disease (hospitalization requiring intubation, mechanical ventilation, and possibly organ failure). In addition, for the AI model's evaluation, the WHO scores were grouped into 3-, 4-, and 5-state severity configurations (see Table 1).

TABLE 1 - The WHO-CPS and the 3 Groupings Used to Assess the Disease Status of Patients With COVID-19

| WHO Clinical Progression Descriptor | WHO Score | 3 States | 4 States | 5 States |
| --- | --- | --- | --- | --- |
| Uninfected; no viral RNA detected | 0 | — | — | — |
| Asymptomatic; viral RNA detected | 1 | Ambulatory mild disease | Ambulatory mild disease | Ambulatory mild disease |
| Symptomatic; independent | 2 | Ambulatory mild disease | Ambulatory mild disease | Ambulatory mild disease |
| Symptomatic; assistance needed | 3 | Ambulatory mild disease | Ambulatory mild disease | Ambulatory mild disease |
| Hospitalized; no oxygen therapy* | 4 | Hospitalized disease | Hospitalized moderate disease | Hospitalized moderate disease |
| Hospitalized; oxygen by mask or nasal prongs | 5 | Hospitalized disease | Hospitalized moderate disease | Hospitalized moderate disease |
| Hospitalized; oxygen by NIV or high flow | 6 | Hospitalized disease | Hospitalized severe disease | Hospitalized severe disease |
| Intubation and mechanical ventilation: Po2/Fio2 ≥ 150 or Spo2/Fio2 ≥ 200 | 7 | Intubated critical disease | Intubated critical disease | Intubated critical disease |
| Mechanical ventilation: Po2/Fio2 < 150 (Spo2/Fio2 < 200) or vasopressors | 8 | Intubated critical disease | Intubated critical disease | Intubated critical disease |
| Mechanical ventilation: Po2/Fio2 < 150 and vasopressors, dialysis, or ECMO | 9 | Intubated critical disease | Intubated critical disease | Intubated critical disease plus organ failure |
| Dead | 10 | — | — | — |

Note: This study did not evaluate symptoms in nonhospitalized patients and therefore did not distinguish between scores 1, 2, and 3. The severity scoring followed the WHO working group guidelines,16 using the parameters of viral detection, hospitalization, use of low-flow oxygen (by nasal cannula) or high-flow nonintubated oxygenation (by high-flow nasal cannula or continuous or noninvasive positive airway pressure ventilation), intubation and mechanical ventilation, oxygenation ratios based on Spo2/Fio2 or Po2/Fio2, administration of vasopressors, requirement of dialysis or ECMO (extracorporeal membrane oxygenation), and death. To calculate the oxygenation ratios, Fio2 is expressed as a fraction (ie, 0.5 for 50% inhaled oxygen).

*If hospitalized for isolation only, record status as for ambulatory patient.

ECMO, extracorporeal membrane oxygenation; Fio2, fraction of inspired oxygen; NIV, noninvasive ventilation; Po2, partial pressure of oxygen; Spo2, oxygen saturation.

In this study, we used hierarchical multilabel classification to group the WHO severity scores into coarser labels. This approach was adopted to address the larger label space inherent in individual WHO scores, in contrast to the relatively smaller label space associated with the multilabel severity group. Our evaluation focused on selecting the appropriate number and hierarchy of labels for grouping the severity scores, ensuring coherence in the classification process. For the 3-label hierarchy, we collapsed the WHO scale as follows: “mild ambulatory” (MA) encompassing scores 1 to 3, “hospitalized disease” (HD) covering scores 4 to 6, and “intubated critical disease” (IC) representing scores 7 to 9. In the 4-label hierarchy, we categorized patients into MA for scores 1 to 3, “hospitalized moderate disease” (HM) for scores 4 and 5, “hospitalized severe disease” (HS) for score 6, and IC for scores 7 to 9. Finally, the 5-label hierarchy involved the following groupings: MA for scores 1 to 3, HM for scores 4 and 5, HS for score 6, IC for scores 7 and 8, and “intubated critical disease plus organ failure” (IC+) for score 9. By examining the different hierarchical label configurations, we assessed the performance and coherence of the selected multilabel hierarchy in accurately representing the severity of COVID-19 patients based on the WHO scores.
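For concreteness, the 3 hierarchies can be written as a direct lookup from WHO score to coarse label. This is a minimal sketch of the mapping described above (and in Table 1), not code from the study; scores 0 (uninfected) and 10 (dead) fall outside all 3 groupings.

```python
# Mapping of WHO-CPS scores to the 3-, 4-, and 5-label hierarchies.
GROUPINGS = {
    3: {range(1, 4): "MA", range(4, 7): "HD", range(7, 10): "IC"},
    4: {range(1, 4): "MA", range(4, 6): "HM", range(6, 7): "HS", range(7, 10): "IC"},
    5: {range(1, 4): "MA", range(4, 6): "HM", range(6, 7): "HS",
        range(7, 9): "IC", range(9, 10): "IC+"},
}

def group_who_score(score: int, n_labels: int) -> str:
    """Map a WHO-CPS score (1-9) to its coarse label in the chosen hierarchy."""
    for score_range, label in GROUPINGS[n_labels].items():
        if score in score_range:
            return label
    raise ValueError(f"WHO score {score} is outside the graded range 1-9")

assert group_who_score(6, n_labels=3) == "HD"
assert group_who_score(6, n_labels=4) == "HS"
assert group_who_score(9, n_labels=5) == "IC+"
```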

Medical Image Labeling–Ground Truth

Figure 1 illustrates the 5 pathological CT findings used to train the multiclass lesion segmentation model. The data curation process included manual segmentation of 10 equidistant slices per subject, covering the lungs from apex to base, which took 2 to 6 hours depending on case complexity. The segmentation team consisted of 2 experienced radiologists (20 and 9 years of experience), 2 residents (2 years of experience), and 4 medical students trained by the expert radiologists. Each segmentation was reviewed by at least 1 other team member to ensure quality. The lung and lesion segmentation followed the 2008 thoracic imaging definitions of the Fleischner Society.18 The multiclass segmentation protocol ensured that each lung lesion segmentation remained within the boundaries of the lung segmentation. In addition, strict nonoverlapping criteria were enforced between lesion classes, because the multiclass U-Net segmentation network assigns exactly one label to each voxel.
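These two protocol constraints (lesions confined to the lung mask, one label per voxel) are mechanical enough to verify automatically. The helper below is an illustrative sketch with hypothetical mask arrays, not tooling from the study.

```python
import numpy as np

def check_protocol(lung: np.ndarray, lesions: dict) -> None:
    """Verify that every lesion voxel lies inside the lung mask and that no
    voxel carries more than one lesion label (the multiclass U-Net assigns
    exactly one label per voxel)."""
    stacked = np.stack([m.astype(bool) for m in lesions.values()])
    if np.any(stacked & ~lung.astype(bool)):
        raise ValueError("lesion voxels found outside the lung segmentation")
    if stacked.sum(axis=0).max() > 1:
        raise ValueError("overlapping lesion labels detected")

# Toy example: a 4x4 slice with one GGO voxel inside the lung.
lung = np.zeros((4, 4), bool); lung[1:3, 1:3] = True
ggo = np.zeros((4, 4), bool); ggo[1, 1] = True
check_protocol(lung, {"GGO": ggo, "CON": np.zeros((4, 4), bool)})
```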

FIGURE 1: Manual segmentation of the 5 lesion classes: ground-glass opacity (GGO), consolidation (CON), pleural effusion (PLE), band-like structure (BAN), and bronchi/traction bronchiectasis (TBR).

Labeling the 5 radiographic pathologies involved creating a coarse mask of contiguous lesions using a paint tool and then manually correcting and checking the segmentation using tools such as the erase tool and the logical operator tool in 3D Slicer. The segmentation of each pathological lung imaging finding was performed differently. For example, the GGO, BAN, PLE, and TBR segmentation labels were created in the lung window, whereas the soft tissue window (W: 350; L: 50) was used for CON segmentation. CON lesions were initially identified using the threshold tool to create a coarse mask, and the CON label was then manually corrected in the lung window using the erase tool. Overlapping borders with PLE, if present, were subtracted using the logical operator tool. All vessels within the CON area were included, whereas bronchi and BAN were excluded if not filled with fluid. After segmenting CON, the remaining opacified lung lesions were segmented as GGO. GGO was segmented manually rather than by thresholding, excluding large and intermediate vessels and visible bronchi. Band-like structures were defined as dense structures with a tubular shape, excluding pleura and atelectasis; 3-dimensional visualization was used to identify BAN structures, as they tend to be intertwined with CON or GGO. For the TBR class, the bronchial lumen was segmented using 3D visualization to address motion artifacts and pseudobronchi. Please refer to the Supplementary Material, https://links.lww.com/RLI/A833, for a visualization of the contouring process as defined by the segmentation protocol.
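A display window such as the soft tissue window above simply clips the Hounsfield unit (HU) range around a level. A minimal sketch of this windowing follows; the lung window values used here are a common convention, not a setting stated in the protocol.

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Clip a CT image (in Hounsfield units) to a display window."""
    half = width / 2.0
    return np.clip(hu, level - half, level + half)

ct_slice = np.random.uniform(-1024, 300, size=(512, 512))      # stand-in for a real HU slice
soft_tissue_view = apply_window(ct_slice, level=50, width=350)  # the W: 350, L: 50 window -> [-125, 225] HU
lung_view = apply_window(ct_slice, level=-600, width=1500)      # a conventional lung window; values are an assumption
```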

Data-Centric AI Model to Automate the Multiclass Lesion Segmentation and Disease Severity Assessment

The AssessNet-19 model is a data-centric AI model developed through incremental cycles, in which new subjects were selected according to the needs identified in the previous iteration. Figure 2 shows the final design of the AssessNet-19 model.19 The pipeline includes image preprocessing, lung segmentation, multiclass lesion segmentation, and radiomics feature extraction for severity prediction.

FIGURE 2: Overview of the AssessNet-19 model, a 2-stage pipeline for assessing COVID-19 patients' disease severity. First stage: 10 equidistant axial slices are extracted from each CT scan and paired with ground truth segmentations to train two 2D U-Net networks for lung and multiclass lesion segmentation; the 2D segmentation outputs are then used to reconstruct the 3D volume of the lungs and multiclass lesions for quantification. Second stage: radiomics features are extracted and selected for each lesion class; the selected features are then concatenated, normalized, and fed into the machine learning algorithm. Finally, the model is fine-tuned through cross-validation, and the XGBoost classifier assigns each subject a severity score by majority voting over the subject's 10 axial slices.

The image preprocessing pipeline extracts axial slices and corresponding lesion segmentations from each CT scan and reshapes them to fit the 2D format required by the nnU-Net framework.20 It also applies clipping, normalization, and resampling: intensity values are clipped to the range between the 0.5th and 99.5th percentiles, normalized using a z-score, and resampled to the median voxel size using third-order spline interpolation for image data and nearest-neighbor interpolation for segmentation masks. The multiclass lung and lesion segmentation models use a 2D U-Net architecture21 and were implemented separately with the nnU-Net framework.20 Both were trained on 118 subjects using an NVIDIA RTX A6000 GPU; training took 16 hours for 1000 epochs per fold (57.90 ± 0.54 seconds per epoch) for the lesion segmentation model and 15.29 hours for 1000 epochs per fold (55.06 ± 0.38 seconds per epoch) for the lung segmentation model. Section S1 in the Supplementary Material, https://links.lww.com/RLI/A833, provides the implementation details, and Figure S2 in the Supplementary Material presents the learning curves of the internal training and validation sets for each model. One hundred seven radiomics features were extracted from each axial slice per subject and each lesion class using the pyRadiomics library.22 Essential features for each lesion class were selected using LASSO.23 Following the Image Biomarker Standardization Initiative, shape features were normalized based on the lung segmentation to prevent bias due to lung anatomy.24 Finally, the radiomics-based severity prediction model was trained on the extracted features; various machine learning models were tested using the F1-score as the metric in a 5-fold cross-validation procedure, and the best-performing method, XGBoost,25 was chosen for evaluation in the test cohorts.
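For illustration, the intensity handling of this preprocessing can be sketched as follows. The exact percentile source (per case vs pooled over the training set) and other details are handled internally by nnU-Net, so this is an approximation, not the framework's code.

```python
import numpy as np
from scipy.ndimage import zoom

def clip_and_zscore(volume: np.ndarray, foreground: np.ndarray) -> np.ndarray:
    """Clip intensities to the 0.5th-99.5th percentiles of the foreground
    voxels, then z-score normalize, approximating nnU-Net's CT scheme."""
    voxels = volume[foreground.astype(bool)]
    lo, hi = np.percentile(voxels, [0.5, 99.5])
    clipped = np.clip(volume, lo, hi)
    return (clipped - voxels.mean()) / (voxels.std() + 1e-8)

def resample(volume: np.ndarray, spacing, target_spacing, order: int) -> np.ndarray:
    """Resample to the target (median) voxel spacing: order=3 (third-order
    spline) for images, order=0 (nearest neighbor) for segmentation masks."""
    factors = np.asarray(spacing, float) / np.asarray(target_spacing, float)
    return zoom(volume, factors, order=order)
```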

Benchmarking

For the severity assessment by radiologists, 3 experienced lung radiologists (20, 14, and 9 years of experience) assessed disease severity qualitatively using a 4-class severity scale and quantitatively estimated the disease extent of GGO and CON as a percentage of lung volume. The radiologists were blinded to all patient information, including the final severity score. Furthermore, we compared 3 ways of categorizing disease severity, as a 3-, 4-, or 5-label hierarchy, to identify the hierarchical multilabel classification task best suited, in terms of performance and coherence, to represent disease severity states based on the WHO-CPS.

Statistical Analysis

The statistical analysis evaluated the quality of the automated lung and multiclass lesion segmentations using 2 metrics: the Dice similarity coefficient and the Hausdorff distance. In addition, the performance of the WHO severity prediction model was assessed using multiple metrics: confusion matrices for accuracy analysis, the area under the receiver operating characteristic curve (AUC-ROC), and the F1-score, which summarizes precision and recall. Together, these metrics provide insight into the model's effectiveness in classifying disease severity categories.
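As a reference for how the two segmentation metrics are computed, the sketch below gives plain NumPy/SciPy implementations. The study's exact evaluation code is not published, so treat this as an assumption-level illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 if both empty)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy check: identical masks give Dice 1.0 and Hausdorff distance 0.0.
m = np.zeros((8, 8), bool); m[2:5, 2:5] = True
assert dice(m, m) == 1.0 and hausdorff(m, m) == 0.0
```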

RESULTS

Data Set Stratification and Patient Characteristics

The development and evaluation cohorts were compiled to ensure a balanced distribution of WHO scores. A stratified shuffle split approach was used to divide the development cohort into training and test sets while preserving the same percentage of each WHO class as in the complete set. Figure 3 shows the distribution of WHO scores and disease severity labels in the training, testing, and second testing sets. First, a development cohort of 145 subjects was compiled: 70 from center IBE, 31 from center UPA, and 44 from center UYA, for which a total of 1450 axial slices were manually segmented. These subjects were then randomly divided into 118 cases for training and 27 cases for testing the AssessNet-19 model. The evaluation set comprised 90 subjects, 78 from IBE and 12 from UPA, and was used to evaluate AssessNet-19 in a fully automated fashion. Because the study involved patients transferred from other hospitals in more critical condition, data on the timing of the first PCR test were limited: 58 of the 235 CT scans collected from 3 hospitals were conducted before a confirmatory positive COVID-19 PCR test.
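A minimal sketch of such a stratified split with scikit-learn follows, using stand-in labels (the real split additionally stratified by deceased status and CT kernel).

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

subjects = np.arange(145).reshape(-1, 1)          # 145 development subjects
who_scores = np.tile(np.arange(1, 10), 17)[:145]  # stand-in, roughly balanced WHO classes

# test_size=27 reproduces the reported 118/27 split while preserving the
# per-class WHO score proportions of the full development cohort.
splitter = StratifiedShuffleSplit(n_splits=1, test_size=27, random_state=0)
train_idx, test_idx = next(splitter.split(subjects, who_scores))
print(len(train_idx), len(test_idx))  # 118 27
```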

FIGURE 3: Distribution of WHO scores and disease severity labels among the training, testing, and evaluation sets. N represents the number of subjects, SK refers to soft kernel, and MK refers to medium-soft kernel.

The first cohort was divided into training and testing sets using stratified sampling based on WHO score, deceased subjects, and CT kernels. Figure 3 shows the distribution of WHO scores for the development cohort and second test set. Table 2 summarizes the clinical characteristics recorded for each partition in the training and testing sets, including demographics, anthropometric variables, comorbidities, laboratory variables, and hospitalization characteristics obtained from medical records. This study used clinical data to calculate the WHO severity score, following the guidelines provided by Marshall et al.16 To compute the WHO severity score, we derived the following clinical parameters from the available clinical variables: hospitalization status, mortality status, low Spo2 levels, low Po2 levels, vasopressor usage, intubation or tracheostomy procedure, high-flow oxygen therapy requirement, low-flow oxygen therapy requirement, dialysis requirement, and ECMO (extracorporeal membrane oxygenation) support.
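For illustration, deriving a WHO-CPS score from these parameters can be sketched as a rule cascade following Table 1. Field names are hypothetical, the ambulatory grades 1 to 3 are collapsed as in this study, and the published scale (Marshall et al16) should be consulted for the authoritative definitions.

```python
def who_cps_score(p: dict) -> int:
    """Rule-cascade sketch of the WHO-CPS from Table 1 (hypothetical fields)."""
    if p["dead"]:
        return 10
    if p["intubated"]:  # intubation and mechanical ventilation
        if p["pf_ratio"] < 150 and (p["vasopressors"] or p["dialysis"] or p["ecmo"]):
            return 9
        if p["pf_ratio"] < 150 or p["vasopressors"]:
            return 8
        return 7
    if p["high_flow_or_niv"]:
        return 6
    if p["low_flow_oxygen"]:
        return 5
    if p["hospitalized"]:
        return 4
    return 1  # ambulatory; scores 1-3 were not distinguished in this study

example = dict(dead=False, intubated=True, pf_ratio=120, vasopressors=False,
               dialysis=False, ecmo=False, high_flow_or_niv=False,
               low_flow_oxygen=False, hospitalized=True)
assert who_cps_score(example) == 8
```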

TABLE 2 - Patient Characteristics Among the Training, Testing, and Second Testing Sets

| Clinical Characteristic | Training Set (n = 118) | Available | Testing Set (n = 27) | Available | Second Testing Set (n = 90) | Available |
| --- | --- | --- | --- | --- | --- | --- |
| Demographic characteristics | | | | | | |
| Age | 62.4 ± 14.3 (54, 70) | 118 | 63.3 ± 14.78 (54.5, 73.5) | 27 | 61.4 ± 11.97 (54, 69) | 80 |
| Gender (female) | 41 (34.74%) | 118 | 9 (33.33%) | 27 | 24 (30.0%) | 80 |
| Gender (male) | 77 (65.25%) | 118 | 18 (66.66%) | 27 | 56 (70.0%) | 80 |
| Anthropometric characteristics | | | | | | |
| Height, cm | 171.82 ± 9.47 (165, 178) | 72 | 167.04 ± 10.76 (158.3, 172) | 15 | 171.11 ± 8.28 (165, 176) | 53 |
| Weight, kg | 85.69 ± 16.84 (73.9, 95.9) | 79 | 86.30 ± 22.91 (70.4, 99.6) | 17 | 83.84 ± 14.99 (72.4, 92.2) | 52 |
| BMI | 29.29 ± 5.64 (25.9, 31.9) | 73 | 31.09 ± 6.86 (27.3, 36.3) | 16 | 29.04 ± 5.99 (24.3, 34.4) | 47 |
| Comorbidities | | | | | | |
| Asthma | 10 (8.69%) | 115 | 4 (14.81%) | 27 | 4 (5.0%) | 80 |
| Diabetes | 27 (24.32%) | 111 | 12 (48.0%) | 25 | 33 (41.25%) | 80 |
| COPD | 15 (13.51%) | 111 | 3 (12.0%) | 25 | 15 (18.75%) | 80 |
| Lung fibrosis | 4 (3.47%) | 115 | 0 (0.0%) | 26 | 1 (1.25%) | 80 |
| Laboratory characteristics | | | | | | |
| eGFR | 70.72 ± 26.79 (47.0, 90.0) | 108 | 71.08 ± 29.48 (46.7, 90.0) | 24 | 72.13 ± 23.90 (58.5, 90.0) | 79 |
| WBCs | 9.04 ± 5.08 (5.3, 11.0) | 84 | 9.93 ± 3.63 (7.4, 13.9) | 17 | 11.10 ± 5.01 (8.01, 23.2) | 45 |
| Lymphocytes | 1.13 ± 0.68 (0.8, 1.3) | 64 | 1.03 ± 0.49 (0.64, 1.31) | 16 | 1.06 ± 0.59 (0.71, 1.43) | 16 |
| Neutrophils | 6.84 ± 4.28 (3.9, 8.5) | 45 | 9.39 ± 3.76 (6.55, 12.0) | 10 | 8.48 ± 6.46 (3.3, 13.5) | 12 |
| CRP | 126.4 ± 100 (36.9, 217) | 72 | 112.7 ± 113 (38.8, 157) | 15 | 142.5 ± 107 (62.5, 222) | 43 |

The clinical variables were obtained within ±12 hours of CT acquisition per subject.

Note: Continuous variables are presented as median ± standard deviation, with the interquartile range (IQR; 25th, 75th percentiles) in parentheses. Categorical variables are expressed as numbers and percentages of the available subjects.

BMI, body mass index; COPD, chronic obstructive pulmonary disease; eGFR, estimated glomerular filtration rate; WBCs, white blood cell count; CRP, C-reactive protein.


Multiclass Lesion Segmentation Performance

We evaluated the performance of AssessNet-19, our automated multiclass lung lesion segmentation model, using a test set of 27 subjects. AssessNet-19 was trained with 10 equidistant axial slices per CT scan, addressing the multiclass problem with standard nnU-Net hyperparameters. The learning curves are available in the Supplementary Material, https://links.lww.com/RLI/A833.

The evaluation demonstrated that AssessNet-19 consistently achieved accurate segmentations across all disease severities, aligning well with the ground truth. The model's performance varied across lesion categories, with mean Dice similarity coefficients of 0.70 ± 0.27 for GGO, 0.68 ± 0.34 for CON, 0.65 ± 0.31 for PLE, and 0.30 ± 0.16 for BAN. In addition, AssessNet-19 exhibited improved consistency in segmenting lesion shapes and sparse lesions, as indicated by smaller Hausdorff distances. For a qualitative comparison, Figure 4 illustrates segmentations produced by AssessNet-19 alongside the corresponding ground truth for selected COVID-19 patients with varying disease severities.

Radiomics Signatures to Characterize the COVID-19 Disease

Radiomics signatures play a crucial role in characterizing COVID-19 disease, and our study used a comprehensive process to extract and analyze quantitative features from medical images. This process encompassed automated multiclass lesion segmentation to define regions of interest (ROIs), extraction of radiomic features related to shape, intensity, texture, and spatial relationships within the ROIs, and normalization of the radiomics features. The primary goal was to reduce the dimensionality of the feature space and identify the most relevant features for classification or prediction tasks.
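A sketch of this extraction-and-selection step using the pyRadiomics and scikit-learn APIs follows; the file names, the feature matrix `X`, and the labels `y` are stand-ins, and the study's actual extraction settings are given in its Supplementary Material.

```python
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LassoCV

# Per-lesion-class extraction; the paths are placeholders for a CT slice and
# the corresponding lesion ROI produced by the segmentation model.
extractor = featureextractor.RadiomicsFeatureExtractor()
result = extractor.execute("ct_slice.nrrd", "ggo_mask.nrrd")
feature_vector = np.array(
    [v for k, v in result.items() if not k.startswith("diagnostics")], dtype=float
)

# LASSO feature selection on a stand-in feature matrix: nonzero coefficients
# mark the retained features.
X = np.random.rand(100, 107)           # 100 subjects x 107 radiomics features
y = np.random.randint(0, 4, size=100)  # stand-in 4-state severity labels
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
```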

FIGURE 4: Qualitative results of AssessNet-19 for multiclass lesion segmentation in COVID-19 patients with varying disease severities, including ambulatory mild, hospitalized moderate, and intubated critical cases.

In our study, we focused on identifying 4 key radiomics signatures that effectively characterize the severity of COVID-19 disease. These signatures provide valuable insights into the assessment task, specifically lung lesion segmentation and lesion extent quantification. Figure 5 visually presents the radiomics signatures for the 4 severity states: MA, HM, HS, and IC. Figure 5A showcases the radiomics signatures for the single-class model, whereas Figure 5B displays the signatures for the multiclass model. The spider charts in these figures illustrate the average values of the z-normalized radiomics features used in both models. Figures 5C and 5D demonstrate 3D reference lung and lesion segmentations, respectively, highlighting the segmentation outputs for each disease severity state from the single and multiclass models.

FIGURE 5: Radiomics signatures to characterize the COVID-19 disease. Radiomics signatures of the single-class and multiclass models using the 4-disease state classification, with a representative 3D lung and lesion segmentation for each disease state. The radiomics features were normalized and mainly comprise lesion extent, intensity histograms, and texture features such as the co-occurrence matrix, size zone matrix, neighboring tone difference matrix, dependence matrix, and run length matrix. Radiomics values fall within a range of −1 to 1.

TABLE 3 - Quantitative Evaluation (F1-Score) of the Single-Class Lesion Model, the Multiclass Lesion Model, and the Radiologists' Qualitative Score Assessment on the Development Cohort (27 Manually Segmented Subjects) and the Second Evaluation Cohort (90 Fully Automatically Segmented Subjects)

| Classification Task | Development Cohort: Single-Class Model | Development Cohort: Multiclass Model | Development Cohort: Radiologists' Quality Score | Evaluation Test: Single-Class Model | Evaluation Test: Multiclass Model | Evaluation Test: Radiologists' Quality Score |
| --- | --- | --- | --- | --- | --- | --- |
| 3-WHO classes | 0.71 ± 0.03 | 0.90 ± 0.03 | 0.63 ± 0.10 | 0.66 ± 0.01 | 0.79 ± 0.02 | 0.69 ± 0.03 |
| 4-WHO classes | 0.52 ± 0.03 | 0.74 ± 0.02 | 0.45 ± 0.09 | 0.64 ± 0.02 | 0.76 ± 0.02 | 0.63 ± 0.02 |
| 5-WHO classes | 0.39 ± 0.03 | 0.67 ± 0.03 | — | 0.51 ± 0.02 | 0.66 ± 0.01 | — |

Radiologists' qualitative score: the mean F1-score determined by majority voting among 3 expert radiologists who qualitatively assessed the severity score using only CT images.

To provide further insights into the classification process, we present Table 4, which showcases the relationship between laboratory test results, CT-based quantification of lung lesions, and radiomics signatures in classifying the 4 severity states of COVID-19. This table includes information on crucial laboratory tests (lymphocytes, neutrophils, white blood cell count [WBCs], and estimated glomerular filtration rate [eGFR]) for each severity stage; the laboratory test results were obtained within ±12 hours of CT acquisition per subject. The table's CT lung lesion quantification section displays the extent of the 4 types of lung lesions (GGO, CON, PLE, and BAN) for each severity stage, expressed as percentages, as commonly used by radiologists in clinical practice. Finally, the radiomics disease signature section presents the features used in the multiclass model for each stage of COVID-19 severity. These features encompass lesion extension, intensity histograms, and various texture features such as the co-occurrence matrix, size zone matrix, neighboring tone difference matrix, dependence matrix, and run length matrix. This information provides additional insights into the patient's condition and aids in explaining the predictions made by the AssessNet-19 model.

TABLE 4 - Radiomics Signatures to Characterize the COVID-19 Disease: Multiclass Radiomics Disease Severity Signatures

| Features | A. Mild (n = 12) | H. Moderate (n = 43) | H. Severe (n = 19) | I. Critical (n = 44) | P |
| --- | --- | --- | --- | --- | --- |
| Laboratory tests | | | | | |
| Lymphocytes | 0.88 ± 0.66 | 0.97 ± 0.46 | 1.09 ± 0.45 | 1.57 ± 1.03 | <0.001 |
| Neutrophils | 5.11 ± 2.63 | 5.33 ± 3.41 | 6.94 ± 3.54 | 9.75 ± 5.02 | 0.015 |
| WBCs | 7.36 ± 2.02 | 6.73 ± 3.82 | 8.65 ± 4.15 | 12.13 ± 5.62 | 0.0012 |
| eGFR | 76.16 ± 24.15 | 83.50 ± 16.33 | 81.58 ± 16.91 | 53.28 ± 29.4 | <0.001 |
| CT-based quantification of lung lesions | | | | | |
| GGO lesion extent | 6.99 ± 9.31 | 22.28 ± 20.50 | 34.85 ± 19.52 | 31.62 | |
