The study comprised a retrospective analysis of both clinical and research data from the Department of Nuclear Medicine and PET at Aarhus University Hospital. The clinical cohort consisted of 38 selected patients with inadequate image quality (Table 1). These patients underwent PET imaging for dementia and movement disorders between January 2022 and May 2023. Nuclear medicine physicians assessed the standard PET images as compromised, grading them as suboptimal or even non-diagnostic, primarily due to head motion artifacts. Within the clinical cohort, 11 subjects underwent FDG imaging, 12 underwent [18F]N-(3-iodoprop-2E-enyl)-2beta-carbomethoxy-3beta-(4’-methylphenyl)-nortropane (PE2I) imaging [12], and 15 had [18F]flutemetamol (FMM) imaging [13]. The three tracers are used to uncover different characteristics of dementia and have markedly different uptake patterns: FDG is used to evaluate reduced cortical glucose metabolism pertinent to dementia evaluation, PE2I is used for dopamine transporter visualization with a focus on striatal regions, and FMM is used for amyloid imaging. The FMM image is notably sensitive to motion because it is essentially a white matter (WM) image that is assessed for grey matter (GM) uptake. The institutional review board at Aarhus University Hospital granted access to patient files. Individual patient consent was waived by the institutional review board due to the retrospective nature of the study.
Table 1 The four cohorts: demographics and PET scan information

The research cohort comprised 25 subjects (Table 1), selected from a previously published study [14]. This cohort included 15 patients diagnosed with Lewy body dementia and 10 age-matched, cognitively intact elderly controls with a Montreal Cognitive Assessment score of 26 or above. All subjects underwent PET after injection of [18F]fluoroethoxybenzovesamicol (FEOBV), a radiotracer used for vesicular acetylcholine transporter imaging [14, 15]. The standard PET images for all participants were deemed of adequate research quality based on visual inspection, and the selection was not influenced by factors such as head motion. The research study was conducted according to the Declaration of Helsinki and approved by the Regional Ethics Committee. All participants provided written informed consent.
Data acquisition and standard Brain PET Image Reconstruction

All subjects were scanned on a Biograph Vision 600 PET/CT (Siemens Healthineers, Knoxville, TN, USA) scanner, with a CT (Ref mAs 150; 120 kV) for attenuation correction followed by a PET brain scan (see Table 1 for injected activities and scan times). The patients were positioned in the scanner using a head holder and instructed not to move during the PET/CT. PET data were acquired in list-mode. Brain PET data were reconstructed with attenuation and scatter correction using resolution modeling (PSF) and time-of-flight (TOF), 8 iterations, 5 subsets, 440 matrix, zoom 2, no post-filter, with a final voxel size of 0.83 × 0.83 × 1.65 mm³ and a spatial resolution of around 2 mm FWHM. These images will be denoted Standard Images.
Motion-compensated Brain PET Image Reconstruction

The data-driven motion correction algorithm is based on the assumption that head motion is not a continuous process: head studies are assumed to consist of alternating periods of quiescence and motion. Data from the quiescent periods can then be aligned, lending themselves to piecewise reconstruction and re-assembly into motion-free images. The procedure is performed in three steps:
A. Subdivide the list-mode data into a series of 1.0-s frames. These are used to identify subsets of consecutive frames in which head motion is not detectable (“motion frames”). Each motion frame must have adequate count statistics for deriving a motion-compensating transform to a target frame.
B. Estimate transforms between motion frames.
C. Reconstruct the PET data using the motion frames and their corresponding transforms, together with the appropriate correction factors.
Each step (A, B, C) is expanded below and in Fig. 1.
Fig. 1 Illustration of the three steps (A, B, C) in the data-driven motion-compensated image reconstruction algorithm. List-mode data (A1) are searched to identify motion events based on 1-s center-of-distributions (A2), which are used to define a series of motion frames (A3), with gaps where low-count motion frames are discarded. Non-attenuation-corrected (NAC) image reconstruction (B1) is performed for each motion frame. The summing tree algorithm and mutual information (B2) are used to estimate the rigid-body transformation (B3) between the initial frame and each motion frame. The sinograms and transformations for each motion frame (C1) are used in the OSEM algorithm to reconstruct the MoCo Image (C2)
A. Deriving time bins between motion events

Motion events are identified in the list-mode file and used to define subsets of consecutive frames in which motion is considered undetectable or negligible, using the criteria below. The method is similar to the previously described Merging Adjacent Clusters method [16].
1. For each time sampling interval \(d \ge 1.0\) s, a center-of-distribution (COD) is calculated by finding the most likely location \(p_n(x_n, y_n, z_n)\) in image space of each line-of-response event \(\mathrm{LOR}_n(i, j, \Delta t)\), where \(i\) and \(j\) are the detector pair and \(\Delta t\) is the time-of-flight information, if available. The averaging of these positions is referred to as histo-binning.
2. If the COD is sufficiently different from that of the cumulative prior sampling intervals, the start of a new motion frame is declared. “Sufficiently different” means that the change in COD exceeds the positional uncertainty associated with the noise level of the scan. For example, in a scan with fewer counts the noise level is higher, and the COD must move a greater distance to trigger a new motion frame. See [16] for further details.
3. If the COD is considered stable, the current motion frame is continued.
4. In periods where the COD changes by more than 0.5 mm per 1-s minimal time interval, the frames are discarded as lying within a motion event.
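As a rough illustration of the four criteria above, the following Python sketch segments a per-second COD trace into motion frames. The threshold values, the count-dependent noise model, and the running-mean reference are simplifying assumptions of ours; the actual criteria are those of [16].

```python
import numpy as np

def segment_motion_frames(cod, counts, base_sigma=0.3, jump_mm=0.5):
    """Split a 1-s COD trace into quiescent 'motion frames'.

    cod:        (T, 3) per-second center-of-distribution in mm
    counts:     (T,) events per second (COD noise scales as 1/sqrt(counts))
    base_sigma: assumed COD noise in mm at 1000 counts/s (hypothetical value)
    jump_mm:    per-second displacement above which data lie inside a motion
                event; the jump second itself is kept as the start of the next
                frame here, a simplification of the discard rule in step 4
    Returns a list of (start_s, end_s) motion frames.
    """
    frames, start = [], 0
    ref, n_ref = cod[0], 1
    for t in range(1, len(cod)):
        step = np.linalg.norm(cod[t] - cod[t - 1])
        if step > jump_mm:                      # motion event detected
            if t - start > 1:
                frames.append((start, t))
            start, ref, n_ref = t, cod[t], 1
            continue
        # noise-dependent threshold: fewer counts -> larger tolerated drift
        sigma = base_sigma / np.sqrt(max(counts[t], 1) / 1000.0)
        if np.linalg.norm(cod[t] - ref) > 3 * sigma:   # "sufficiently different"
            frames.append((start, t))
            start, ref, n_ref = t, cod[t], 1
        else:                                    # COD stable: extend the frame
            ref = (ref * n_ref + cod[t]) / (n_ref + 1)
            n_ref += 1
    frames.append((start, len(cod)))
    return frames
```

For a synthetic trace that is stationary for 10 s, jumps 5 mm, and is stationary again, the sketch returns two motion frames separated at the jump.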
B. Estimating transforms between motion frames

The algorithm uses a 3D-to-2D projection approach, which improves the noise characteristics of each projection and enables registration even in noisy data. This is performed using the Summing Tree Structural approach [17].
1. For each motion frame, a non-attenuation-corrected (NAC) reconstruction is performed.
2. 2D projections are calculated in the x, y, and z directions to maximize the counts used for registration.
3. The rigid-body transformations are calculated for the x, y, and z directions iteratively, since movement in one direction affects the other directions.
4. The registration first compares and corrects the motion frames that are most similarly positioned. The algorithm then adds the counts of the registered frames and iteratively compares the most similar frames again, using the Summing Tree Structural Motion Correction algorithm [17].
5. The final target image is the first frame, which is assumed to be well registered to the CT.
6. The objective function for the registration is the Mutual Information Criterion [18].
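A minimal sketch of the projection-and-registration idea: collapse the NAC volume to 2D projections, then search for the transform that maximizes mutual information. Here a brute-force one-axis integer shift stands in for the iterative rigid-body optimization, and all function names are illustrative rather than the actual implementation of [17].

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)        # marginal of image a
    py = p.sum(axis=0, keepdims=True)        # marginal of image b
    nz = p > 0                               # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def project(volume):
    """Collapse a 3D NAC volume to three 2D projections (sum over each axis),
    concentrating counts so noisy frames can still be registered."""
    return [volume.sum(axis=k) for k in range(3)]

def best_shift(ref, mov, search=range(-3, 4)):
    """Toy 1-axis registration: exhaustive integer-shift search maximizing MI."""
    scores = {s: mutual_information(ref, np.roll(mov, s, axis=0)) for s in search}
    return max(scores, key=scores.get)
```

Applied to a projection and a copy of it shifted by two pixels, the search recovers the inverse shift, because mutual information peaks when the two images coincide.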
C. Iterative Reconstruction with motion correction

Once transforms have been derived for all motion frames, this information can be incorporated into the reconstruction of the image, taking into account correction factors such as attenuation and scatter correction, which depend on the µ-map at the position where the PET events actually occurred. All motion frames and their corresponding transforms enter the iterative reconstruction as follows [19].
$$f^{\,k+1}\left(b\right)=\frac{f^{\,k}\left(b\right)}{\sum_{t}M^{-1}\left(B\left(A\left(l,t\right)\right),t\right)}\;\sum_{t}M^{-1}\left(B\left(\frac{A\left(l,t\right)\,P\left(l,t\right)}{F\left(M\left(f^{\,k}\left(b\right),t\right)\right)+O\left(l,t\right)}\right),t\right)$$
where \(b\): image voxel; \(l\): LOR bin; \(t\): motion (time) frame; \(k\): iteration index; \(O\): (Randoms × Norm + Scatter) × ACF; \(P\): prompts; \(A\): ACF (attenuation correction factors); \(F(\cdot)\): forward projection; \(B(\cdot)\): back projection; \(M(b,t)\): motion correction in image space; \(M^{-1}(b,t)\): inverse motion correction in image space; \(M(l,t)\): motion correction in sinogram space.
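The structure of the motion-compensated update can be sketched in Python for a toy 1-D problem. Here np.roll stands in for the rigid-body motion operators M and M⁻¹, a small matrix plays the role of F (with B as its transpose), and the attenuation factors are folded into the system matrix; all names are our illustration, not the vendor implementation.

```python
import numpy as np

def mc_mlem(prompts, sys_mat, shifts, additive, n_iter=10):
    """Toy 1-D motion-compensated MLEM update.

    prompts:  (T, L) prompt sinograms, one per motion frame t
    sys_mat:  (L, B) forward model F; back projection B is its transpose
    shifts:   (T,) integer voxel shifts; np.roll(f, s) plays M(f, t) and
              np.roll(., -s) plays the inverse motion correction
    additive: (T, L) additive term O(l, t) (randoms/scatter), assumed known
    """
    n_vox = sys_mat.shape[1]
    f = np.ones(n_vox)
    # sensitivity image: back-projected ones per frame, warped back and summed
    sens = sum(np.roll(sys_mat.T @ np.ones(sys_mat.shape[0]), -s) for s in shifts)
    for _ in range(n_iter):
        update = np.zeros(n_vox)
        for t, s in enumerate(shifts):
            expected = sys_mat @ np.roll(f, s) + additive[t]   # F(M(f,t)) + O
            update += np.roll(sys_mat.T @ (prompts[t] / expected), -s)  # M^-1(B(.))
        f = f / sens * update
    return f
```

With a noise-free identity system and two frames related by a known shift, the reconstruction recovers the true image, illustrating how data from all motion frames contribute to a single motion-free estimate.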
There are two principal assumptions. First, there is no motion between the initial CT scan and the first motion frame; we make no attempt to correct for motion in this interval. Consequently, a potential attenuation-correction mismatch may be observed in the final reconstructed image in some cases. This could be addressed by combining the MoCo method with a technique to align CT and PET images, or by using deep-learning-based CT-free approaches for attenuation and scatter correction [20,21,22]. Second, motion events are relatively brief, with extended periods of quiescence between them: if a patient moves their head continuously, the data appear as a series of short motion frames, some of which may be discarded. The remaining data are scaled by a decay-corrected factor to compensate for the missing events. This scaling is applied regardless of the reason a motion frame was discarded.
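One plausible reading of the decay-corrected scaling is sketched below: weight each retained interval by its integrated decay factor, so the compensation reflects when during the scan the discarded frames occurred. The function and its exact form are our illustration, not the vendor code.

```python
import math

F18_HALF_LIFE_S = 6586.2  # fluorine-18 half-life in seconds

def decay_corrected_scale(kept, total, half_life=F18_HALF_LIFE_S):
    """Scale factor compensating for events lost to discarded motion frames.

    kept:  list of (start_s, end_s) retained motion frames
    total: (start_s, end_s) of the full acquisition
    Each interval is weighted by its integrated decay factor, so discarding a
    late (low-count) frame costs less than discarding an early one.
    """
    lam = math.log(2) / half_life

    def integral(t0, t1):
        # integral of exp(-lam * t) over [t0, t1]
        return (math.exp(-lam * t0) - math.exp(-lam * t1)) / lam

    return integral(*total) / sum(integral(t0, t1) for t0, t1 in kept)
```

If nothing is discarded the factor is exactly 1; keeping only the first half of a 10-min scan yields a factor between 1 and 2, slightly below 2 because the retained early half carries more of the decaying activity.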
In short, the method automatically detects motion during the PET data acquisition and transforms all data back to the first quiescent part of the PET scan [16, 17]. Thus, assuming no motion between the CT and the start of the PET data acquisition, the PET data will be fully motion-corrected and aligned to the CT, achieving accurate correction for attenuation and scatter. These images will be denoted MoCo Images; they were reconstructed using investigational prototype software (e7tools; Siemens Healthineers) with the same reconstruction parameters as the Standard Images.
Phantom Study

A Hoffman phantom [23] was filled with an FDG solution simulating a 4:1 uptake ratio between GM and WM. The phantom was positioned within a custom-made Phantom Movement System (Supplemental Fig. S1), which facilitated precise translations (x, z) and rotations (x, z) with accuracies better than 1 mm and 1 degree, respectively.
The phantom was scanned on a Biograph Vision 600 (Siemens Healthineers) during nine scenarios (S). See Supplemental Table S1 for detailed information about each scenario. Each scenario involved a CT scan followed by a dynamic PET scan.
• Scenario REF/S0 served as a motion-free reference, for which the algorithm was tested to demonstrate that it “does no harm” when no motion is present. This allows the method to be used universally in both the absence and presence of motion, making technical as well as clinical implementation easier.
• In the subsequent scenarios (S1–S7), the phantom was displaced in a series of 118-second stationary phases separated by 2-second movements.
• S8 was set up to stress-test the algorithm with long continuous rotations throughout the entire scan.
The initial phantom filling consisted of a 65 MBq FDG solution. Due to radioactive decay, the activity concentration varied across scenarios; this was compensated for through randomized decimation of the list-mode files from scenarios S0–S7 using an investigational software prototype, LMChopper (e7tools; Siemens Healthineers, Knoxville, TN, USA), before PET image reconstruction into Standard and MoCo Images. In each scenario, we verified that the MoCo reconstruction algorithm correctly detected the phantom motion by comparing against the time points at which the phantom was moved, i.e., every 120 s.
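The decimation step can be mimicked by a simple random thinning of events, with the keep fraction set to the ratio of target to actual activity. LMChopper itself operates on the vendor list-mode format; this sketch is only a stand-in for the idea.

```python
import random

def decimate_events(events, target_activity, actual_activity, seed=0):
    """Randomly thin list-mode events so scenarios acquired at different
    (decayed) activity levels end up with matched count statistics.

    events: iterable of list-mode events (any objects)
    target_activity / actual_activity: keep fraction, e.g. 30 MBq / 60 MBq
    seed: fixed seed so the decimation is reproducible
    """
    keep = target_activity / actual_activity
    rng = random.Random(seed)
    return [e for e in events if rng.random() < keep]
```

Thinning 100 000 events to a 50% target keeps close to half of them, with only binomial fluctuation around the expected count.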
Clinical study

The clinical cohort comprised data from 38 clinical patients who underwent brain PET scans using three distinct tracers. This subset specifically comprises patients for whom an expert nuclear medicine physician judged the Standard Image to be suboptimal or of non-diagnostic quality. Our analysis concentrated on determining whether MoCo Images could enhance image quality, potentially obviating the need for rescans. For each tracer, the Standard Image and MoCo Image were blinded and randomized for the clinical read.
Two experienced nuclear medicine physicians (JA, PB) independently conducted a blinded evaluation of image quality, employing a 5-point Likert scale for sharpness and quality, defined as:
1) Unacceptable image quality: extremely blurry / obscured by artifacts. PET examination needs to be repeated.
2) Poor image quality: Blurry with artifacts. PET examination needs to be repeated.
3) Acceptable image quality: This is the minimum acceptable image quality. Still some blurriness/artifacts.
4) Good image quality: Minor blurriness/artifacts.
5) Excellent image quality: No sign of blurring or artifacts.
Grades 1–2 indicate a PET image of insufficient quality for clinical diagnostics, and grades 3–5 indicate a PET image that can be used for clinical reporting or included in data for a scientific paper. Finally, each physician was prompted to select the ‘superior’ image for every patient. The evaluations were performed independently, without mutual consultation.
Research Study

The research cohort comprised data obtained from 25 elderly subjects. This subset specifically comprises individuals anticipated to remain stationary, with minimal head movement, during the PET/CT examination. For this cohort, we focused on whether motion correction could further enhance image quality, or whether it could degrade image quality, in subjects with minimal or no head movement.
In addition to the Standard Images and MoCo Images, we also created a manual, image-based motion-compensated image, denoted the IB-MoCo Image. The 30-min PET data were binned into six 5-min frames that were reconstructed individually. The six images were visually inspected in PMOD 4.0 (PMOD Technologies Ltd, Zürich, Switzerland); images degraded by in-frame motion would have been discarded (this was not needed), and the remaining images were registered to the patient’s T1 MRI image and averaged. IB-MoCo, sometimes used in research projects, improves image quality, but it does not account for in-frame motion or for mismatch between CT and PET, leading to suboptimal attenuation and scatter correction.
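Schematically, the IB-MoCo image is the average of the individually aligned frames. In this sketch, np.roll stands in for the rigid registration to the T1 MRI performed in PMOD, and the shifts are assumed to be known from that registration.

```python
import numpy as np

def ib_moco(frames, shifts):
    """Image-based MoCo: align individually reconstructed 5-min frames and
    average them. `shifts` stands in for the rigid transforms estimated
    against the subject's T1 MRI; np.roll replaces true resampling."""
    aligned = [np.roll(f, -s, axis=0) for f, s in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```

Because each frame is reconstructed before alignment, any motion within a 5-min frame, and any CT-to-PET mismatch, remains uncorrected, which is the limitation noted above.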
For each subject, the Standard Image, IB-MoCo Image, and MoCo Image were masked and randomized. An experienced nuclear medicine physician (JH) did a blinded evaluation of the image quality employing the previously defined 5-point Likert scale and was prompted to select the ‘superior’ image for every subject.