Review Article

Interpretative applications of artificial intelligence in musculoskeletal imaging: concepts, current practice, and future directions

Teresa T. Martin-Carreras1, Hongming Li1,2, Po-Hao Chen3

1Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; 2Center for Biomedical Image Computing & Analytics, University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA; 3Department of Radiology, Cleveland Clinic, Cleveland, OH, USA

Contributions: (I) Conception and design: TT Martin-Carreras, PH Chen; (II) Administrative support: TT Martin-Carreras; (III) Provision of study materials or patients: None; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: None; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Teresa T. Martin-Carreras, MD. Department of Radiology, University of Wisconsin School of Medicine and Public Health, Division of Musculoskeletal Imaging and Intervention, 600 Highland Ave., Madison, WI 53792, USA. Email: TMartin-Carreras@uwhealth.org.

Abstract: Artificial intelligence (AI) promises wide-reaching impacts on the field of radiology and has the potential to influence every aspect of image interpretation. In recent decades, significant advancements in computing power, combined with the availability of large data stores or “Big Data” and the democratization of algorithms, have revolutionized AI and machine learning (ML). Research applications utilizing these technological advancements are booming, and their adoption is expected to continue to rise at a rapid pace. While AI and ML have impacted many components of the imaging value chain, the purpose of this article is to discuss interpretative uses of the technology as it relates to musculoskeletal (MSK) radiology. This review provides a general introduction to AI and ML concepts, and highlights the major promises, challenges, and anticipated future applications of these developments in MSK radiology. AI and ML advances for image interpretation can increase the value that MSK radiologists provide to their patients, referring clinicians, and organizations by increasing diagnostic accuracy while decreasing turnaround times, enhancing image processing and quantitative analysis, and potentially improving patient outcomes. Familiarity with these processes among MSK clinicians and researchers will be paramount to the improvement and implementation of these new techniques in clinical practice. Radiology departments, practices, and practitioners who embrace these technologies now will be well positioned to lead this influential change in our field in the near future.

Keywords: Artificial intelligence (AI); machine learning (ML); deep learning; musculoskeletal system (MSK system); radiology


Received: 07 April 2020; Accepted: 30 July 2020; Published: 30 September 2020.

doi: 10.21037/jmai-20-30


Introduction

One of the most promising and thriving areas of innovation in healthcare is the implementation of artificial intelligence (AI) and machine learning (ML) techniques for medical image analysis. Advancements in computing power, coupled with the increased availability of large data stores or “Big Data”, have revolutionized AI and ML applications in medical imaging. Similarly, AI democratization, the idea that AI resources such as data and algorithms should be made available to a wider range of uses and users, has garnered increased attention, with many institutions following suit by making large datasets publicly available for algorithm development (1,2). Over the last decade, research publications on the use of AI in radiology have more than doubled, demonstrating a rapidly growing trend. Pesapane et al. recently evaluated the number of AI-related articles indexed in EMBASE, stratified by radiological subspecialty and body part. As of 2017, neuroradiology accounted for the largest share of AI-related radiology publications at 34%; however, the bone, spine, and joints category had the second greatest number of AI-related articles at 9% of publications (3). These promising data, coupled with the breadth of applications discussed in this article, showcase how musculoskeletal (MSK) radiology is uniquely positioned to be a leading subspecialty in the application of these techniques (3-5).

In radiology, the imaging value chain refers to a series of discrete tasks which together serve to facilitate volume-to-value healthcare, and which aim to create value for the organization, the referring provider, and the patient (6-12). While AI and ML have been leveraged to optimize many links in the imaging value chain, this article focuses on the image interpretation aspects of the chain and discusses the interpretative uses of AI in MSK radiology. This article also provides a general introduction to AI and ML topics, and highlights the major promises, challenges, and anticipated future directions of these techniques in MSK radiology. The authors aim to complement and expand upon the current literature with an accessible foundational review of ML for the MSK radiologist, with an emphasis on interpretative use cases.

References were acquired using the PubMed database. Combinations of the following search terms were used: AI, radiology, ML, MSK, deep learning, sarcoma, radiomics, bone(s), and muscle(s). Further sub-selection of MSK-related references was made to primarily include articles with a focus on interpretative uses of AI in MSK imaging. For topics applicable to all medical imaging subspecialties such as basic AI concepts, challenges, big data, and algorithm democratization, general references not specific to MSK imaging were also used.


Overview

The term artificial intelligence (AI) is broadly applied when a device performs tasks that mimic cognitive functions such as problem-solving. Researchers often describe AI as a branch of computer science dedicated to creating systems which perform tasks that generally require human intelligence (3). Although self-learning is not a prerequisite for AI, the best-known recent advancements in radiology AI have been in ML. ML is often considered a subfield of AI and refers to a system capable of self-learning (3). An overview of the relationship between AI and its subcategories, as well as representative examples of algorithm subtypes for each ML task, is shown in Figure 1. Although ML algorithms differ in complexity and methodology, all of them generate a model representation based on the provided input data. The ultimate goal for these algorithms is to achieve accurate outputs when provided with previously unseen testing data. This fundamental ability of ML to “learn from” and “respond to” large real-world data using statistical approximation is at the core of its robustness. ML tasks can be further subdivided into three principal categories: supervised learning, unsupervised learning, and reinforcement learning. Individual algorithms may use a combination of supervised and unsupervised learning methods, with or without a reinforcement feedback loop (13,14).

Figure 1 Overview of the relationship between artificial intelligence (AI), types of machine learning (ML) tasks, and ML subcategories.

Supervised learning

The hallmark of supervised learning is its reliance on “ground truth”, or data that is accepted as accurate for training and evaluation. In radiology, “ground truth” typically refers to radiologists’ image annotations, results of radiology reports, or histopathological diagnoses.

In supervised learning, the algorithms are provided with training data that pairs the source data with its intended output. In image interpretation, the radiology images comprise the data, and individual findings or diagnoses may comprise the intended outputs. For instance, Forsberg et al. used a supervised algorithm for vertebral body detection and labeling, with labeled spine annotations stored in a single institution’s imaging archive serving as the training data (15).
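To make the paired-data idea concrete, the following is a minimal supervised-learning sketch in Python, not the Forsberg et al. pipeline: placeholder feature vectors and labels stand in for image-derived data and radiologist annotations, and a standard scikit-learn classifier learns the mapping and is evaluated on held-out data.

```python
# Minimal supervised-learning sketch (illustrative only, with synthetic data):
# inputs are paired with "ground truth" labels, and the trained model is
# evaluated on previously unseen test data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))       # placeholder feature vectors (one per image/ROI)
y = rng.integers(0, 2, size=500)     # placeholder labels, e.g., finding present/absent

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # learn the mapping from data to labels
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```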

Unsupervised learning

In unsupervised learning, the algorithm does not rely on labeled data. Instead, the algorithm identifies patterns in large datasets and separates items into groups based on similarities and differences (14,16). In MSK imaging, Mandava et al. developed a dynamic unsupervised clustering algorithm to automatically segment osteosarcoma from non-tumor regions on MR images. Their algorithm also differentiated viable from non-viable (necrotic) tissue within the tumor, an important marker of treatment response when assessing drug-induced tumor necrosis. In the study, multi-spectral information from STIR and T2-weighted MRI sequences and a dynamic clustering algorithm were used to automatically segment osteosarcomas. After the ML algorithm performed segmentation, additional algorithm components analyzed texture features and pixel intensity values to delineate the tumor volume. For validation purposes, the automatic algorithm results were compared against manual delineations from a radiologist, and the authors found high similarity between their methodology and the manually drawn ROIs (Dice coefficient of 0.72) (17).
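The sketch below illustrates the same general idea with a much simpler stand-in: k-means clustering of pixel intensities on a synthetic image (not the dynamic Harmony Search clustering of Mandava et al.), with the Dice coefficient used to compare the resulting mask against a toy manual reference.

```python
# Unsupervised segmentation sketch: cluster pixel intensities without labels,
# then validate one cluster against a manual reference mask using Dice.
import numpy as np
from sklearn.cluster import KMeans

img = np.random.rand(128, 128)                 # placeholder "MR" slice
features = img.reshape(-1, 1)                  # one intensity feature per pixel
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
pred_mask = labels.reshape(img.shape) == labels.max()   # treat one cluster as "tumor"

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual_mask = np.zeros_like(pred_mask)         # toy radiologist-drawn reference
manual_mask[40:90, 40:90] = True
print("Dice vs. manual reference:", round(dice(pred_mask, manual_mask), 3))
```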

Reinforcement learning

Reinforcement learning refers to ML algorithms which learn from the consequences of interactions with an environment without being explicitly trained. Each action affects the next, and the algorithm receives feedback in the form of positive or negative reinforcement (18). For the MSK system, reinforcement learning represents a way to simulate physiologic function rather than the morphology-based approaches seen in traditional image analysis. For instance, in an effort to expand reinforcement learning algorithms within the field of medicine, the 2017 Learning to Run challenge at the Neural Information Processing Systems conference tasked competitors with developing AI to control a physiologically based MSK model and make it run as far as possible through an obstacle course. “Obstacles” were both external and internal modifiers, including steps, slippery floors, and muscle weakness. Of all participants, the top-performing reinforcement learning models of eight teams traveled at least 15 meters within 10 seconds (19).
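The toy example below shows the core reward-feedback loop with tabular Q-learning on a hypothetical one-dimensional “track”; the Learning to Run challenge itself used far more sophisticated deep, continuous-control methods on a physiologically based simulator, so this is purely a conceptual sketch.

```python
# Tabular Q-learning sketch: an agent learns from positive reinforcement
# (reward at the goal) rather than from explicitly labeled training data.
import numpy as np

n_states, n_actions, goal = 10, 2, 9   # 1-D track; actions: 0 = step back, 1 = step forward
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(50):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else 0.0                      # reward feedback
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # temporal-difference update
        s = s_next
        if s == goal:
            break

print("learned policy (0=back, 1=forward):", Q.argmax(axis=1))
```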

Artificial neural networks (ANNs) and deep learning

ANNs are a subset of ML that has seen recent success in computer vision, a type of AI application that translates fittingly to medical image interpretation. They are computational platforms inspired by, although not technically analogous to, the brain’s neuronal functions. They process information through stacks of highly interconnected processing elements referred to as artificial neurons, perceptrons, or simply nodes (20). A modern ANN is structured as one input layer, one or more “hidden” layers, and one output layer. Each connection strength is weighted, and the weights are determined by an iterative training process using a large amount of input data with known outputs (14,21,22). The term deep learning refers to a subset of ANN algorithms which typically contain many more “hidden” layers and are thus regarded as “deep”. Convolutional neural networks (CNNs) represent a type of deep learning ANN (Figure 2) and have garnered significant interest for medical image analysis (14,22). A convolutional layer performs a transformation at each pixel, determining its value from those of the neighboring pixels. Additional layer types, known as pooling layers, can be used to combine pixel values by taking the maximum or the average of neighboring pixels (Figure 3) (23).

Figure 2 Illustration of a convolutional neural network (CNN) for prediction (classification or regression) at the image level.
Figure 3 Illustration of a neural network for pixel/voxel-wise prediction (i.e. segmentation) in a patient with high-grade myxofibrosarcoma of the thigh.
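A minimal PyTorch sketch of the architecture pattern described above (and schematized in Figure 2) is shown here; the layer sizes and two-class output are arbitrary choices for illustration rather than any published MSK model.

```python
# Minimal CNN sketch: convolutional layers transform each pixel based on its
# neighborhood, pooling layers downsample by taking local maxima, and a final
# fully connected layer produces an image-level prediction.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # pooling: max over 2x2 neighborhoods
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))       # image-level logits

x = torch.randn(4, 1, 64, 64)                      # batch of 4 single-channel 64x64 "images"
logits = TinyCNN()(x)
print(logits.shape)                                # torch.Size([4, 2])
```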

Opportunities

Although many promises have been linked to AI in medicine, we will focus on the major promises associated with value creation for interpretative uses of AI in MSK imaging: increased diagnostic accuracy with decreased turnaround times, enhanced image processing and quantitative analysis, and the potential for improved patient outcomes (5,24).

Increased diagnostic accuracy with decreased turnaround time

Bone age assessment was one of the first radiologic tasks to be considered for automation once ML models became practical. To date, several studies have successfully used deep learning CNNs to estimate skeletal maturity with accuracy similar to that of expert radiologists, and often much more efficiently (25,26). Similarly, deep learning algorithms have been used to classify acute and non-acute pediatric elbow fractures in the setting of trauma, successfully distinguishing true fractures from open growth plates with an AUC of 0.95 and an accuracy of 88% (27). These deep learning models have the potential to improve the accuracy of fracture diagnosis and to decrease overall turnaround times in high-volume emergency departments and urgent care centers. They also stand to facilitate patient care and treatment at facilities without access to on-site trained radiologists, where there is a substantial need for the accurate disposition of patients at the point of care.

More recently, Roblot et al. built and evaluated a CNN-based deep learning algorithm which could, in concert, detect the position of the meniscal horns, detect the presence of a tear, and determine the orientation of the tear. Their algorithm yielded AUC values of 0.92, 0.94, and 0.83, respectively, and a final weighted AUC of 0.90 for the combined tasks. The group’s work highlights the emergence of more end-to-end AI-powered diagnostic tools (28).

Enhanced image processing and quantitative analysis

Broadly, enhancements in image processing and quantitative analysis techniques are those that improve image quality and facilitate interpretation. ML methods have been successfully applied to reconstruct MR images from accelerated acquisitions which subsample k-space (29). ML models have also been successfully applied to image segmentation. Clinically, segmentation is key to chemoradiation treatment planning, can provide prognostic information, and can be used to assess therapeutic response. For example, studies have successfully employed automated and semi-automated algorithms to assess osteosarcoma and soft tissue sarcoma response to treatment by comparing the extent of tumor necrosis as determined by the algorithm with histopathologic assessment at the time of tumor resection (30,31).
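The sketch below shows what “subsampling k-space” means in the simplest terms: a retrospectively undersampled k-space and a naive zero-filled inverse FFT reconstruction. Learned approaches such as the variational network in reference (29) replace this naive step with a trained model that suppresses undersampling artifacts; the sampling pattern and acceleration factor here are illustrative assumptions.

```python
# Sketch of k-space subsampling and a naive zero-filled reconstruction.
import numpy as np

image = np.random.rand(256, 256)                   # placeholder fully sampled image
kspace = np.fft.fftshift(np.fft.fft2(image))       # simulate its k-space data

mask = np.zeros_like(kspace, dtype=bool)
mask[:, ::4] = True                                # keep every 4th phase-encode line (~4x acceleration)
mask[:, 112:144] = True                            # fully sample the low-frequency center

zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
print("sampled fraction of k-space:", round(mask.mean(), 3))
```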

Image segmentation is also a crucial initial step in quantitative analysis and image post-processing, used to extract clinically relevant data for other parts of the ML pipeline. Studies have demonstrated good performance of CNNs in segmenting knee joint anatomy (32). A recent study by Liu et al. went a step beyond segmentation and combined segmentation and classification CNNs to detect cartilage lesions within the knee joint on MR images. The researchers retrospectively analyzed fat-suppressed T2-weighted fast spin-echo MRI datasets of the knee from 175 patients with knee pain, utilizing a CNN for segmenting cartilage and bone followed by a second classification CNN to detect structural abnormalities within the segmented cartilage tissue. The reference standard used for the CNN classification was the interpretation, provided by a fellowship-trained MSK radiologist, of the presence or absence of a cartilage lesion within 17,395 small image patches placed on the articular surfaces of the femur and tibia. The study’s deep learning approach achieved high diagnostic performance, with AUCs above 0.91 for detecting cartilage lesions, and good intraobserver agreement with a κ of 0.76 (33).
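For readers less familiar with the performance metrics quoted above, the following sketch computes AUC and Cohen’s kappa with scikit-learn on entirely synthetic patch-level predictions; it illustrates how such metrics are typically calculated and is not based on the Liu et al. data.

```python
# Evaluation-metric sketch: AUC for lesion detection and Cohen's kappa for
# agreement, computed on toy patch-level labels and scores.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=1000)                                # reference-standard labels per patch
scores = np.clip(truth * 0.6 + rng.normal(0.2, 0.25, 1000), 0, 1)    # model probabilities
reads_1 = (scores > 0.5).astype(int)                                 # binarized reading, session 1
reads_2 = (np.clip(scores + rng.normal(0, 0.1, 1000), 0, 1) > 0.5).astype(int)  # session 2

print("AUC:", round(roc_auc_score(truth, scores), 3))
print("intraobserver kappa:", round(cohen_kappa_score(reads_1, reads_2), 3))
```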

Potential for improved patient outcomes

Compelling results have also been obtained with interpretative analysis of cross-sectional examinations. A recent study used a supervised deep learning classification model to evaluate knee MRI pathology. Use of this CNN-based model showed a mean increase in ACL tear detection specificity of 4.8%. In the study population, this would clinically translate to potentially three fewer patients unnecessarily undergoing surgery for suspected ACL tears, obviating unwarranted operative morbidity. The trained algorithm also provided results for 120 knee MRI exams in under 2 minutes, whereas the human experts required more than 3 hours to perform the same task (34).

The distribution of fat and muscle in the body has been linked to patient outcomes in several conditions (35), and sarcopenia has been associated with poor outcomes after major surgery (36). To this end, a recent study employed a CNN to segment and quantify body composition, showing that the performance of a fully automated algorithm for this task met or exceeded expert manual segmentation (35). Such fully automated models can accelerate additional applications of, and research into, these biomarkers in large populations, and may serve to improve patient outcomes. For example, such a model could allow for improved detection of sarcopenic patients at greatest risk for perioperative morbidity, who may benefit from preoperative rehabilitation.
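Once an automated segmentation mask exists, the quantification step itself is straightforward, as the toy calculation below shows: a binary muscle mask on an axial CT slice is converted to a cross-sectional area using an assumed in-plane pixel spacing (in practice read from the DICOM metadata).

```python
# Quantification sketch: converting a segmented muscle mask on one axial CT
# slice into a cross-sectional area in cm^2.
import numpy as np

muscle_mask = np.zeros((512, 512), dtype=bool)
muscle_mask[200:320, 120:400] = True               # placeholder segmented muscle region

pixel_spacing_mm = (0.78, 0.78)                    # assumed in-plane spacing (from DICOM in practice)
pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0
muscle_area_cm2 = muscle_mask.sum() * pixel_area_cm2
print(f"skeletal muscle area: {muscle_area_cm2:.1f} cm^2")
```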


Future directions and challenges

Creating better datasets

While both hardware and ML algorithm improvements have broadly benefited AI advancements in radiology, and while these improvements have led to an increase in open-sourcing of datasets, database requirements and access are generally subspecialty-specific, and the available databases vary widely in scope and content. Publicly available MSK imaging datasets include the MRNet labeled dataset consisting of 1,370 knee MRI exams with cases including ACL and meniscal tears (34), the Osteoarthritis Initiative multi-center dataset consisting of more than 26 million radiographic and MR images with patient-reported outcomes and biospecimen analyses (37), and the fastMRI dataset with images drawn from 10,000 scans, which also includes evaluation metrics and baseline algorithms (38). However, datasets of similar scale do not currently exist for rarer diseases such as bone tumors and rheumatologic diseases. Efforts are underway which stand to benefit the aggregation and accessibility of rare-disease datasets. For example, The Cancer Imaging Archive (TCIA) offers a platform to openly publish data. A search for sarcoma on TCIA at the time of this writing returns 150 cases of bone and soft tissue sarcomas, with more than half of the cases published in 2019 alone (39).

Radiomics and precision medicine

With better datasets, AI and ML models hold promise in answering numerous oncologic questions that will influence clinical decision making, particularly as it relates to treatment response and prognostic determinations. The field of radiomics extracts information from clinical images for use as measurable imaging biomarkers (40). Radiomics studies consist of three main parts: tumor segmentation (discussed previously), image feature extraction, and statistical analysis/modeling of these features (41).

Through the extraction of basic quantitative features (e.g., size, shape, intensity), as well as more complex features derived from an assortment of statistical approaches, determinations about tumor classification, treatment response, and prognosis can be made (Figure 4).

Figure 4 Pipeline of radiomic analysis, including segmentation, feature extraction, and predictive modeling in a patient with high-grade myxofibrosarcoma of the thigh.
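As a simplified illustration of the feature-extraction step of the pipeline in Figure 4, the sketch below computes a handful of first-order intensity statistics and a simple shape feature within a synthetic segmented ROI; dedicated toolkits (e.g., pyradiomics) compute hundreds of such features, and all values here are placeholders.

```python
# Radiomics feature-extraction sketch: first-order statistics and a basic
# shape feature computed inside a segmented ROI of a placeholder volume.
import numpy as np
from scipy import stats

image = np.random.rand(64, 64, 32) * 1000        # placeholder MR volume
mask = np.zeros_like(image, dtype=bool)
mask[20:44, 20:44, 10:22] = True                 # placeholder tumor segmentation

voxels = image[mask]
features = {
    "volume_voxels": int(mask.sum()),            # shape: lesion size
    "mean_intensity": float(voxels.mean()),      # first-order statistics
    "intensity_sd": float(voxels.std()),
    "skewness": float(stats.skew(voxels)),
    "entropy": float(stats.entropy(np.histogram(voxels, bins=32)[0] + 1e-9)),
}
print(features)
```

In a full radiomics study, a feature table of this kind would then feed the statistical modeling step, for example a classifier predicting histologic grade or treatment response.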

In MSK imaging, radiomics has shown success in soft tissue sarcomas for use cases such as analysis of pulmonary metastases, differentiating benign from malignant myxomatous tumors, and histologic grade prediction using imaging features from MRI and FDG-PET (42-44). While radiomics does not require the use of AI techniques, several studies have shown success in combining them. For example, a recent study by Yin et al. assessed the optimal ML methods for the preoperative differentiation of sacral chordoma and sacral giant cell tumor based on enhanced and unenhanced CT features (45). Additional studies have successfully combined radiomics and ML classifiers for differentiating metastatic from completely responded sclerotic bone lesions in prostate cancer, and for discerning benign from malignant vertebral compression fractures (46,47).

Reinventing imaging paradigms

A type of neural network generating more recent enthusiasm is the generative adversarial network (GAN). These algorithms represent a type of deep learning model which trains two competing networks concurrently: a generator network that synthesizes data, and a discriminator network that learns to differentiate between the synthesized data and real data (48).
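A skeletal PyTorch sketch of this two-network setup is shown below: a generator maps MR-like patches to synthetic CT-like patches while a discriminator learns to tell synthetic from real, and each network takes one adversarial update step. This is a generic illustration on random tensors, not the context-aware model of Nie et al.

```python
# Skeletal GAN sketch: generator vs. discriminator trained adversarially.
import torch
import torch.nn as nn

G = nn.Sequential(                                   # generator: MR patch -> synthetic CT patch
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(                                   # discriminator: patch -> real/fake score
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

mr, real_ct = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)  # toy paired patches

# one discriminator step: real CT -> label 1, synthetic CT -> label 0
fake_ct = G(mr).detach()
loss_d = bce(D(real_ct), torch.ones(8, 1)) + bce(D(fake_ct), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# one generator step: try to make the discriminator call the synthesis real
loss_g = bce(D(G(mr)), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```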

Applications of GANs to medical imaging are currently limited. However, a study by Nie et al. successfully employed a supervised GAN model to estimate brain and, more relevant to MSK, pelvis CT images from their corresponding MR images. Their method also outperformed three comparison methods (49).

These technologies have not yet been validated in a head-to-head comparison against conventionally acquired CT with respect to diagnostic accuracy for underlying disease. However, potential advantages of using GANs for synthetic CT generation from MR images include MR-only workflows, reduced costs to the patient and healthcare systems from performing multiple imaging exams, and reduced radiation dose to the patient. These applications can be useful in adult and pediatric populations, but are particularly beneficial to pediatric patients, as children are more susceptible to radiation effects than adults (50). For instance, a child undergoing MRI for clinical concern for osteomyelitis could be spared a radiograph or CT if an osseous lesion requires further characterization, as the images could be generated directly from the MR data.


Conclusions

AI and ML models are on course to revolutionize the medical imaging industry. Improvements in computing power, increased access to large datasets, and algorithm democratization have transformed AI and ML. As such, research applications leveraging these technological advancements are predictably expanding, and their use is expected to continue to increase rapidly. However, many of the current applications remain in their early stages, and there exist more questions and uncertainty than answers. Today’s algorithms largely represent “narrow AI”, meaning that they focus only on narrow and explicit tasks. It will require many more years before these algorithms attain a more generalized capacity to undertake a wide range of clinical questions, similar to medical subspecialists (51). As a result, for the foreseeable future, modern AI will not be capable of replacing all of an MSK radiologist’s work. In particular, the anatomic complexity, the multi-modality nature of MSK disease imaging, and the low incidence of many bone and soft tissue diseases all present unique challenges for ML. In the short term, however, AI and ML advances for image interpretation are likely to continue to increase diagnostic accuracy with decreased turnaround times, enhance image processing and quantitative analysis, and potentially improve patient outcomes. We anticipate a synergistic future in which the interplay of radiologists and machines leads to better care and patient outcomes than can be achieved by either one independently. Radiology practices and practitioners that embrace and implement these technologies now will be poised to lead this transformative change in the coming years.


Acknowledgments

Funding: None.


Footnote

Peer Review File: Available at http://dx.doi.org/10.21037/jmai-20-30

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/jmai-20-30). PHC serves as an unpaid editorial board member of Journal of Medical Artificial Intelligence from Oct 2018 to Sep 2020. The other authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Allen B, Agarwal S, Kalpathy-Cramer J, et al. Democratizing AI. J Am Coll Radiol 2019;16:961-3. [Crossref] [PubMed]
  2. Kobayashi Y, Ishibashi M, Kobayashi H. How will “democratization of artificial intelligence” change the future of radiologists? Jpn J Radiol 2019;37:9-14. [Crossref] [PubMed]
  3. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018;2:35. [Crossref] [PubMed]
  4. Gyftopoulos S, Lin D, Knoll F, et al. Artificial Intelligence in Musculoskeletal Imaging: Current Status and Future Directions. AJR Am J Roentgenol 2019;213:506-13. [Crossref] [PubMed]
  5. Syed AB, Zoga AC. Artificial Intelligence in Radiology: Current Technology and Future Directions. Semin Musculoskelet Radiol 2018;22:540-5. [Crossref] [PubMed]
  6. Boland GW, Enzmann DR, Duszak R. Actionable Reporting. J Am Coll Radiol 2014;11:844-5. [Crossref] [PubMed]
  7. Boland GW, Duszak R, Dreyer K. Appropriateness, Scheduling, and Patient Preparation. J Am Coll Radiol 2014;11:225-6. [Crossref] [PubMed]
  8. Boland GW, Thrall JH, Duszak R. Business Intelligence, Data Mining, and Future Trends. J Am Coll Radiol 2015;12:9-11. [Crossref] [PubMed]
  9. Boland GW, Duszak R, Larson PA. Communication of Actionable Information. J Am Coll Radiol 2014;11:1019-21. [Crossref] [PubMed]
  10. Boland GW, Duszak R, Mayo-Smith W. Optimizing Modality Operations. J Am Coll Radiol 2014;11:654-5. [Crossref] [PubMed]
  11. Boland GW, Duszak R, McGinty G, et al. Delivery of Appropriateness, Quality, Safety, Efficiency and Patient Satisfaction. J Am Coll Radiol 2014;11:7-11. [Crossref] [PubMed]
  12. Boland GW, Duszak R, Kalra M. Protocol Design and Optimization. J Am Coll Radiol 2014;11:440-1. [Crossref] [PubMed]
  13. Zaharchuk G, Gong E, Wintermark M, et al. Deep Learning in Neuroradiology. Am J Neuroradiol 2018;39:1776-84. [Crossref] [PubMed]
  14. Choy G, Khalilzadeh O, Michalski M, et al. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018;288:318-28. [Crossref] [PubMed]
  15. Forsberg D, Sjöblom E, Sunshine JL. Detection and Labeling of Vertebrae in MR Images Using Deep Learning with Clinical Annotations as Training Data. J Digit Imaging 2017;30:406-12. [Crossref] [PubMed]
  16. Tack C. Artificial intelligence and machine learning | applications in musculoskeletal physiotherapy. Musculoskelet Sci Pract 2019;39:164-9. [Crossref] [PubMed]
  17. Mandava R, Alia OM, Wei BC, et al. Osteosarcoma segmentation in MRI using dynamic Harmony Search based clustering. 2010 International Conference of Soft Computing and Pattern Recognition [Internet]. Cergy-Pontoise, France: IEEE; 2010 [cited 2019 Aug 20]. p. 423-9. Available online: http://ieeexplore.ieee.org/document/5686624/
  18. Jonsson A. Deep Reinforcement Learning in Medicine. Kidney Dis (Basel) 2019;5:18-22. [Crossref] [PubMed]
  19. Kidziński Ł, Mohanty SP, Ong C, et al. Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments. arXiv preprint arXiv:1804.00361, 2018 [cited 2019 Aug 20]. Available online: http://arxiv.org/abs/1804.00361
  20. Wason R. Deep learning: Evolution and expansion. Cogn Syst Res 2018;52:701-8. [Crossref]
  21. Akay A, Hess H. Deep Learning: Current and Emerging Applications in Medicine and Technology. IEEE J Biomed Health Inform 2019;23:906-20. [Crossref] [PubMed]
  22. Shen D, Wu G, Suk HI. Deep Learning in Medical Image Analysis. Annu Rev Biomed Eng 2017;19:221-48. [Crossref] [PubMed]
  23. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597, 2015 [cited 2019 Apr 13]. Available online: http://arxiv.org/abs/1505.04597
  24. Thrall JH, Li X, Li Q, et al. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. J Am Coll Radiol 2018;15:504-8. [Crossref] [PubMed]
  25. Ren X, Li T, Yang X, et al. Regression Convolutional Neural Network for Automated Pediatric Bone Age Assessment from Hand Radiograph. IEEE J Biomed Health Inform 2019;23:2030-8. [Crossref] [PubMed]
  26. Larson DB, Chen MC, Lungren MP, et al. Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs. Radiology 2018;287:313-22. [Crossref] [PubMed]
  27. Rayan JC, Reddy N, Kan JH, et al. Binomial Classification of Pediatric Elbow Fractures Using a Deep Learning Multiview Approach Emulating Radiologist Decision Making. Radiol Artif Intell 2019;1:e180015. [Crossref]
  28. Roblot V, Giret Y, Bou Antoun M, et al. Artificial intelligence to diagnose meniscus tears on MRI. Diagn Interv Imaging 2019;100:243-9. [Crossref] [PubMed]
  29. Hammernik K, Klatzer T, Kobler E, et al. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med 2018;79:3055-71. [Crossref] [PubMed]
  30. Glass JO, Reddick WE. Hybrid artificial neural network segmentation and classification of dynamic contrast-enhanced MR imaging (DEMRI) of osteosarcoma. Magn Reson Imaging 1998;16:1075-83. [Crossref] [PubMed]
  31. Monsky WL, Jin B, Molloy C, et al. Semi-Automated Volumetric Quantification of Tumor Necrosis in Soft Tissue Sarcoma Using Contrast Enhanced MRI. Anticancer Res 2012;32:4951-61. [PubMed]
  32. Zhou Z, Zhao G, Kijowski R, et al. Deep convolutional neural network for segmentation of knee joint anatomy. Magn Reson Med 2018;80:2759-70. [Crossref] [PubMed]
  33. Liu F, Zhou Z, Samsonov A, et al. Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection. Radiology 2018;289:160-9. [Crossref] [PubMed]
  34. Bien N, Rajpurkar P, Ball RL, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet. PLoS Med 2018;15:e1002699.
  35. Weston AD, Korfiatis P, Kline TL, et al. Automated Abdominal Segmentation of CT Scans for Body Composition Analysis Using Deep Learning. Radiology 2019;290:669-79. [Crossref] [PubMed]
  36. Sheetz KH, Waits SA, Terjimanian MN, et al. Cost of Major Surgery in the Sarcopenic Patient. J Am Coll Surg 2013;217:813-8. [Crossref] [PubMed]
  37. National Institutes of Health. The Osteoarthritis Initiative [Internet]. Available online: https://nda.nih.gov/oai
  38. Zbontar J, Knoll F, Sriram A, et al. fastMRI: An Open Dataset and Benchmarks for Accelerated MRI. arXiv preprint arXiv:1811.08839, 2018 [cited 2019 Aug 21]. Available online: http://arxiv.org/abs/1811.08839
  39. National Cancer Institute CIP. The Cancer Imaging Archive (TCIA) [Internet]. Available online: https://www.cancerimagingarchive.net/
  40. Rudie JD, Rauschecker AM, Bryan RN, et al. Emerging Applications of Artificial Intelligence in Neuro-Oncology. Radiology 2019;290:607-18. [PubMed]
  41. Court LE, Fave X, Mackin D, et al. Computational resources for radiomics. Transl Cancer Res 2016;5:340-8. [Crossref]
  42. Vallières M, Freeman CR, Skamene SR, et al. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities. Phys Med Biol 2015;60:5471-96. [Crossref] [PubMed]
  43. Martin-Carreras T, Li H, Cooper K, et al. Radiomic features from MRI distinguish myxomas from myxofibrosarcomas. BMC Med Imaging 2019;19:67. [Crossref] [PubMed]
  44. Corino VDA, Montin E, Messina A, et al. Radiomic analysis of soft tissues sarcomas can distinguish intermediate from high-grade lesions. J Magn Reson Imaging 2018;47:829-40. [Crossref] [PubMed]
  45. Yin P, Mao N, Zhao C, et al. Comparison of radiomics machine-learning classifiers and feature selection for differentiation of sacral chordoma and sacral giant cell tumour based on 3D computed tomography features. Eur Radiol 2019;29:1841-7. [Crossref] [PubMed]
  46. Acar E, Leblebici A, Ellidokuz BE, et al. Machine learning for differentiating metastatic and completely responded sclerotic bone lesion in prostate cancer: a retrospective radiomics study. Br J Radiol 2019;92:20190286. [Crossref] [PubMed]
  47. Frighetto-Pereira L, Rangayyan RM, Metzner GA, et al. Shape, texture and statistical features for classification of benign and malignant vertebral compression fractures in magnetic resonance images. Comput Biol Med 2016;73:147-56. [Crossref] [PubMed]
  48. Emami H, Dong M, Nejad-Davarani SP, et al. Generating synthetic CTs from magnetic resonance images using generative adversarial networks. Med Phys 2018;45:3627-36. [Crossref] [PubMed]
  49. Nie D, Trullo R, Lian J, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017. Cham: Springer; 2017 [cited 2019 Feb 24]. p. 417-25. Available online: http://link.springer.com/10.1007/978-3-319-66179-7_48
  50. Johnson C, Martin-Carreras T, Rabinowitz D. Pediatric Interventional Radiology and Dose-Reduction Techniques. Semin Ultrasound CT MR 2014;35:409-14. [Crossref] [PubMed]
  51. Kohli M, Prevedello LM, Filice RW, et al. Implementing Machine Learning in Radiology Practice and Research. AJR Am J Roentgenol 2017;208:754-60. [Crossref] [PubMed]
doi: 10.21037/jmai-20-30
Cite this article as: Martin-Carreras TT, Li H, Chen PH. Interpretative applications of artificial intelligence in musculoskeletal imaging: concepts, current practice, and future directions. J Med Artif Intell 2020;3:13.
