A conventional‐to‐spectral CT image translation augmentation workflow for robust contrast injection‐independent organ segmentation

Purpose

In computed tomography (CT) cardiovascular imaging, the numerous contrast injection protocols used to enhance structures make it difficult to gather training datasets for deep learning applications supporting diverse protocols. Moreover, creating annotations on noncontrast scans is extremely tedious. Recently, spectral CT's virtual noncontrast (VNC) images have been used as data augmentation to train segmentation networks that perform on enhanced and true-noncontrast (TNC) scans alike, while improving results on protocols absent from their training dataset. However, spectral data are not widely available, making it difficult to gather specific datasets for each task. As a solution, we present a data augmentation workflow based on a trained image translation network, bringing spectral-like augmentation to any conventional CT dataset.

Method

The conventional CT-to-spectral image translation network (HUSpectNet) was first trained to generate VNC images from conventional Hounsfield unit (HU) images, using an unannotated spectral dataset of 1830 patients. It was then tested on a second dataset of 300 spectral CT scans by comparing VNC images generated through deep learning (VNCDL) to their true counterparts. To illustrate our workflow's efficiency and compare it with true spectral augmentation, HUSpectNet was applied to a third dataset of 112 spectral scans to generate VNCDL images alongside the HU and VNC images. Three different three-dimensional (3D) networks (U-Net, X-Net, and U-Net++) were trained for multilabel heart segmentation, following four augmentation strategies. As baselines, training was performed on contrasted images without (HUonly) and with conventional gray-value augmentation (HUaug). Then, the same networks were trained using a proportion of contrasted and VNC/VNCDL images (TrueSpec/GenSpec). Each training strategy applied to each architecture was evaluated using Dice coefficients on a fourth multicentric, multivendor, single-energy CT dataset of 121 patients, including different contrast injection protocols and unenhanced scans. The U-Net++ results were further explored with distance metrics on every label.
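The TrueSpec/GenSpec strategies above substitute a proportion of VNC or VNCDL volumes for contrast-enhanced HU volumes during training. A minimal sketch of that substitution step is shown below; the function name and the `p_vnc` mixing probability are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def spectral_augment(hu_volume, vnc_volume, p_vnc=0.5, rng=None):
    """Spectral-style augmentation sketch: with probability p_vnc, replace
    the contrast-enhanced HU volume by its (true or generated) VNC
    counterpart for this training sample. p_vnc is a hypothetical
    hyperparameter, not the proportion used in the study."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_vnc:
        return vnc_volume
    return hu_volume
```

In the GenSpec strategy, `vnc_volume` would be the VNCDL image produced by the translation network, so the augmentation requires no genuine spectral acquisition.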

Results

Tested on 300 full scans, our HUSpectNet translation network shows a mean absolute error of 6.70 ± 2.83 HU between VNCDL and VNC, while the peak signal-to-noise ratio reaches 43.89 dB. GenSpec and TrueSpec show very close results regardless of the protocol and architecture used: mean Dice coefficients (DSCmean) are equal within a margin of 0.006, ranging from 0.879 to 0.938. Their performances significantly increase on TNC scans (p-values < 0.017 for all architectures) compared to HUonly and HUaug, with DSCmean of 0.448/0.770/0.879/0.885 for HUonly/HUaug/TrueSpec/GenSpec using the U-Net++ architecture. Significant improvements are also noted for all architectures on chest–abdominal–pelvic scans (p-values < 0.007) compared to HUonly and on pulmonary embolism scans (p-values < 0.039) compared to HUaug. Using U-Net++, DSCmean reaches 0.892/0.901/0.903 for HUonly/TrueSpec/GenSpec on pulmonary embolism scans and 0.872/0.896/0.896 for HUonly/TrueSpec/GenSpec on chest–abdominal–pelvic scans.

Conclusion

Using the proposed workflow, we trained versatile heart segmentation networks on a dataset of conventional enhanced CT scans, providing robust predictions on both enhanced scans with different contrast injection protocols and TNC scans. The performances obtained were not significantly inferior to training the model on a genuine spectral CT dataset, regardless of the architecture implemented. Using a general-purpose conventional-to-spectral CT translation network as data augmentation could therefore contribute to reducing data collection and annotation requirements for machine learning-based CT studies, while extending their range of application.
