The high prevalence of osteoporosis in postmenopausal women and the elderly has drawn increasing attention due to the associated high risk of bone fractures [33]. Current methods for clinical prognosis of bone fragility fractures rely mainly on Dual-energy X-ray Absorptiometry (DXA) measurements of bone mineral density (BMD). However, BMD measures only bone mass loss without considering bone microstructural features, thus accounting for only 50-60% of bone fragility fractures [1]. Recently, the trabecular bone score (TBS), which provides a measure of microarchitectural changes in bone [27], [10], was approved by the FDA for clinical application to improve BMD-based prognosis of bone fracture risk. However, its capability to fully capture bone microarchitectural changes remains debatable [10].
In recent years, researchers have employed finite element modeling (FEM) methods to assess bone fragility risks based on quantitative computed tomography (QCT) images [2], [21]. These methods not only enable accurate quantification of 3D bone geometry but also allow direct assessment of bone strength for individual patients. However, QCT-based FEM approaches still face two major challenges: (1) QCT has relatively low image resolution (voxel size around 0.3-0.5 mm at best), thus potentially missing important microstructural information in the modeling process, and (2) FEM modeling is time-consuming, costly, and technically challenging.
To address these issues, researchers have recently endeavored to use deep learning (DL) approaches to reconstruct high-resolution images from low-resolution CT images of bone, with some promising results [8]. In addition, DXA, MRI, and QCT images have been used directly to predict bone fracture risks via pre-trained DL models [13], [17], [22]. In a preliminary attempt, our group found that DXA images in multiple projections could be used to train high-fidelity DL models to predict microstructural features [36] and mechanical properties of trabecular bone cubes [35], [34]. This is important because these results confirmed that the microstructural and mechanical properties of trabecular bone cubes could be extracted from two-dimensional (2D) DXA images despite their low resolution (pixel size around 0.3 mm at best). However, it is technically challenging to translate such DXA-based approaches to the assessment of microstructural and mechanical properties of the whole bone.
Such limitations of DXA can potentially be overcome by using QCT. First, trabecular bone cubes could be dissected directly from whole-bone QCT models. Next, the microstructural and mechanical properties of the trabecular cubes could be extracted from QCT images, instead of multiple DXA projections, via a pre-trained DL model. Hence, the local microstructural and mechanical properties of the whole bone could be readily mapped using the DL model, thus significantly improving prediction of whole-bone fractures. The challenge is that training a high-fidelity DL model requires a large dataset, which is practically difficult to obtain from in vivo or cadaver samples.
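For illustration, the cube-dissection step described above can be sketched as a sliding-window operation over a QCT volume. The cube size, stride, and random toy volume below are assumptions for the sketch, not the parameters used in this study:

```python
import numpy as np

def extract_cubes(volume, cube_size, stride):
    """Dissect a 3D QCT volume into overlapping trabecular cubes.

    volume: 3D numpy array of attenuation values (toy data here).
    Returns an array of shape (n_cubes, cube_size, cube_size, cube_size)
    plus the corner index of each cube, so that per-cube predictions
    from a pre-trained DL model can be mapped back onto the whole bone.
    """
    cubes, corners = [], []
    nx, ny, nz = volume.shape
    for i in range(0, nx - cube_size + 1, stride):
        for j in range(0, ny - cube_size + 1, stride):
            for k in range(0, nz - cube_size + 1, stride):
                cubes.append(volume[i:i + cube_size,
                                    j:j + cube_size,
                                    k:k + cube_size])
                corners.append((i, j, k))
    return np.stack(cubes), corners

# Toy example: a random 32^3 "QCT" volume dissected into 16^3 cubes.
vol = np.random.rand(32, 32, 32)
cubes, corners = extract_cubes(vol, cube_size=16, stride=8)
print(cubes.shape)  # (27, 16, 16, 16): 3 overlapping positions per axis
```

Each extracted cube would then be passed to the pre-trained model, and the prediction stored at its corner index to build the whole-bone property map.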
As an alternative, our lab has recently developed a novel probability-based mathematical framework to generate digital trabecular bone samples that preserve the microarchitectural features of real trabecular bone samples at different anatomic locations [15]. With this parametric approach, digital bone samples can be synthesized to cover a large variety of microarchitectural features, thus providing a new way to acquire the large dataset required for training high-fidelity DL models. However, discrepancies inevitably exist between synthesized and real bone samples, which would compromise prediction accuracy if QCT-based DL models were trained on the synthesized dataset alone.
Recently, deep transfer learning has gained increasing attention because it offers a new opportunity to use synthesized datasets in training high-fidelity DL models. Deep transfer learning is an advanced machine learning method that gains knowledge while solving a problem in one domain and applies it to a different but related problem in another domain [32]. Since deep transfer learning relaxes the assumption that the probability distribution of the training dataset must be identical to that of the testing dataset, the number of real bone samples needed from the target domain to train high-fidelity deep learning models could be significantly reduced [29]. A previous study has shown that the training dataset need not be identically distributed with the testing dataset to train high-fidelity deep learning models [24]. For instance, it was reported that deep transfer learning models trained using abundant synthesized images and a small number of real images achieved prediction accuracy similar to models trained using real images alone [12]. This discovery motivated us to use deep transfer learning techniques to address the 'big data' issue.
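As a minimal numerical illustration of this pretrain-then-fine-tune idea (not the actual network or data used in this study), the sketch below fits a linear model on abundant "synthesized" source-domain samples, then adapts it with a few gradient steps on a handful of "real" target-domain samples whose input-output relationship is deliberately shifted. All data and both relationships are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: abundant synthesized samples, y ~ 2.0 * x + noise.
Xs = rng.normal(size=(1000, 1))
ys = 2.0 * Xs[:, 0] + 0.01 * rng.normal(size=1000)

# Target domain: only 10 "real" samples with a shifted relationship, y = 2.3 * x.
Xt = rng.normal(size=(10, 1))
yt = 2.3 * Xt[:, 0]

# "Pretrain" on the source domain (closed-form least squares).
w = np.linalg.lstsq(Xs, ys, rcond=None)[0]

# "Fine-tune" on the target domain: gradient steps starting from the
# pretrained weight, rather than training from scratch on 10 samples.
lr = 0.05
for _ in range(500):
    grad = Xt.T @ (Xt @ w - yt) / len(yt)  # mean-squared-error gradient
    w = w - lr * grad

print(round(float(w[0]), 2))  # ≈ 2.3: the slope adapts to the target domain
```

The same pattern underlies deep transfer learning: the pretrained parameters encode source-domain knowledge, and a small target-domain dataset is enough to shift them toward the real distribution.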
To this end, this study was performed to prove the concept that, assisted by a generative model, high-fidelity deep transfer learning models can be trained to predict the stiffness tensor of trabecular bone cubes using a significantly reduced dataset of real bone samples.