Stöger K, Schneeberger D, Holzinger A. Medical artificial intelligence: the European legal perspective. Commun ACM. 2021;64(11):34–6.
Tjoa E, Guan C. A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI. CoRR. 2021. abs/1907.07374. http://arxiv.org/abs/1907.07374. Accessed 2024.
Saeed W, Omlin CW. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. 2021. arXiv preprint arXiv:211106420.
Cyras K, Rago A, Albini E, Baroni P, Toni F. Argumentative XAI: A survey. 2021. arXiv preprint arXiv:210511266.
Johnson RH. Manifest Rationality: A Pragmatic Theory of Argument, p. 408. New York: Lawrence Erlbaum Associates; 2012. https://doi.org/10.4324/9781410606174.
Rago A, Cocarascu O, Toni F. Argumentation-Based Recommendations: Fantastic Explanations and How to Find Them. In: Lang J, editor. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018. Stockholm; 2018. pp. 1949–55. https://doi.org/10.24963/ijcai.2018/269.
Rago A, Cocarascu O, Bechlivanidis C, Lagnado DA, Toni F. Argumentative explanations for interactive recommendations. Artif Intell. 2021;296:103506. https://doi.org/10.1016/j.artint.2021.103506.
Vassiliades A, Bassiliades N, Patkos T. Argumentation and explainable artificial intelligence: a survey. Knowl Eng Rev. 2021;36:e5. https://doi.org/10.1017/S0269888921000011.
Jin D, Pan E, Oufattole N, Weng WH, Fang H, Szolovits P. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Appl Sci. 2021;11(14):6421.
Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32(suppl_1):D267–D270.
Marro S, Molinet B, Cabrio E, Villata S. Natural Language Explanatory Arguments for Correct and Incorrect Diagnoses of Clinical Cases. In: Proceedings of the 15th International Conference on Agents and Artificial Intelligence (ICAART 2023), vol. 1. Lisbon, Portugal; 2023. pp. 438–49.
Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. J Am Med Inform Assoc. 2011;18(5):544–51.
Friedman CP, Elstein AS, Wolf FM, Murphy GC, Franz TM, Heckerling PS, et al. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA. 1999;282(19):1851–6.
Berner ES, Webster GD, Shugerman AA, Jackson JR, Algina J, Baker AL, et al. Performance of four computer-based diagnostic systems. N Engl J Med. 1994;330(25):1792–6.
Wang H, Li Y, Khan SA, Luo Y. Prediction of breast cancer distant recurrence using natural language processing and knowledge-guided convolutional neural network. Artif Intell Med. 2020;110:101977.
Rumshisky A, Ghassemi M, Naumann T, Szolovits P, Castro V, McCoy T, et al. Predicting early psychiatric readmission with natural language processing of narrative discharge summaries. Transl Psychiatry. 2016;6(10):e921.
Feller DJ, Zucker J, Yin MT, Gordon P, Elhadad N. Using clinical notes and natural language processing for automated HIV risk assessment. J Acquir Immune Defic Syndr (1999). 2018;77(2):160.
Bacchi S, Oakden-Rayner L, Zerner T, Kleinig T, Patel S, Jannes J. Deep learning natural language processing successfully predicts the cerebrovascular cause of transient ischemic attack-like presentations. Stroke. 2019;50(3):758–60.
Plass M, Kargl M, Kiehl TR, Regitnig P, Geißler C, Evans T, et al. Explainability and causability in digital pathology. J Pathol Clin Res. 2023;9(4):251–60.
Donnelly K, et al. SNOMED CT: The advanced terminology and coding system for eHealth. Stud Health Technol Inform. 2006;121:279.
Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi JC, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care. 2005;43(11):1130–9.
Liu S, Ma W, Moore R, Ganesan V, Nelson S. RxNorm: prescription for electronic drug information exchange. IT Prof. 2005;7(5):17–23.
Hirsch JA, Leslie-Mazwi TM, Nicola GN, Barr RM, Bello JA, Donovan WD, et al. Current procedural terminology; a primer. J Neurointerventional Surg. 2015;7(4):309–12.
Köhler S, Gargano M, Matentzoglu N, Carmody LC, Lewis-Smith D, Vasilevsky NA, et al. The human phenotype ontology in 2021. Nucleic Acids Res. 2021;49(D1):D1207–17.
Sun W, Rumshisky A, Uzuner O. Evaluating temporal relations in clinical text: 2012 i2b2 challenge. J Am Med Inform Assoc. 2013;20(5):806–13.
Henry S, Buchan K, Filannino M, Stubbs A, Uzuner O. 2018 n2c2 shared task on adverse drug events and medication extraction in electronic health records. J Am Med Inform Assoc. 2020;27(1):3–12.
Ben Abacha A, Shivade C, Demner-Fushman D. Overview of the MEDIQA 2019 shared task on textual inference, question entailment and question answering. In: Demner-Fushman, D., Cohen, K.B., Ananiadou, S., Tsujii, J. (eds.) Proceedings of the 18th BioNLP Workshop and Shared Task. Florence: Association for Computational Linguistics; 2019. pp. 370–379.
Khetan V, Wadhwa S, Wallace B, Amir S. SemEval-2023 task 8: Causal medical claim identification and related PIO frame extraction from social media posts. In: Ojha, A.K., Doğruöz, A.S., Da San Martino, G., Tayyar Madabushi, H., Kumar, R., Sartori, E. (eds.) Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). Toronto: Association for Computational Linguistics; 2023. pp. 2266–2274.
Jullien M, Valentino M, Frost H, O’Regan P, Landers D, Freitas A. SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data. 2023. arXiv preprint arXiv:230502993.
Wadhwa S, Khetan V, Amir S, Wallace B. Redhot: A corpus of annotated medical questions, experiences, and claims on social media. 2022. arXiv preprint arXiv:221006331.
Johnson AE, Pollard TJ, Shen L, Lehman LwH, Feng M, Ghassemi M, et al. MIMIC-III, a freely accessible critical care database. Sci Data. 2016;3(1):1–9.
Pollard TJ, Johnson AE, Raffa JD, Celi LA, Mark RG, Badawi O. The eICU Collaborative Research Database, a freely available multi-center database for critical care research. Sci Data. 2018;5(1):1–13.
Moody GB, Mark RG, Goldberger AL. PhysioNet: a web-based resource for the study of physiologic signals. IEEE Eng Med Biol Mag. 2001;20(3):70–5.
Sudlow C, Gallacher J, Allen N, Beral V, Burton P, Danesh J, et al. UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015;12(3):e1001779.
Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging. 2013;26:1045–57.
Ohta T, Tateisi Y, Kim JD, Mima H, Tsujii J. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. In: Proceedings of the Human Language Technology Conference. San Francisco: Morgan Kaufmann Publishers Inc.; 2002. pp. 73–7.
Bada M, Eckert M, Evans D, Garcia K, Shipley K, Sitnikov D, et al. Concept annotation in the CRAFT corpus. BMC Bioinforma. 2012;13(1):1–20.
Pyysalo S, Ananiadou S. Anatomical entity mention recognition at literature scale. Bioinformatics. 2014;30(6):868–75.
Kim JD, Ohta T, Pyysalo S, Kano Y, Tsujii J. Overview of BioNLP’09 shared task on event extraction. In: Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task. Portland: Omnipress; 2009. pp. 1–9.
Kim JD, Wang Y, Yasunori Y. The GENIA event extraction shared task, 2013 edition - overview. In: Proceedings of the BioNLP Shared Task 2013 Workshop. 2013. pp. 8–15.
Smith L, Tanabe LK, Kuo CJ, Chung I, Hsu CN, Lin YS, et al. Overview of BioCreative II gene mention recognition. Genome Biol. 2008;9(2):1–19.
Li J, Sun Y, Johnson RJ, Sciaky D, Wei CH, Leaman R, et al. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database. 2016:baw068.
Kim JD, Ohta T, Tsuruoka Y, Tateisi Y, Collier N. Introduction to the bio-entity recognition task at JNLPBA. In: Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its Applications (JNLPBA ’04). USA: Association for Computational Linguistics; 2004. pp. 70–75.
Pyysalo S, Ohta T, Miwa M, Tsujii J. Towards exhaustive protein modification event extraction. In: Proceedings of BioNLP 2011 Workshop. Portland: Association for Computational Linguistics; 2011. pp. 114–123.
Krallinger M, Rabal O, Leitner F, Vazquez M, Salgado D, Lu Z, et al. The CHEMDNER corpus of chemicals and drugs and its annotation principles. J Cheminformatics. 2015;7(1):1–17.
Gerner M, Nenadic G, Bergman CM. LINNAEUS: a species name identification system for biomedical literature. BMC Bioinforma. 2010;11(1):1–17.
Doğan RI, Leaman R, Lu Z. NCBI disease corpus: a resource for disease name recognition and concept normalization. J Biomed Inform. 2014;47:1–10.
Honnibal M, Montani I. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. 2017.
Eyre H, Chapman AB, Peterson KS, Shi J, Alba PR, Jones MM, et al. Launching into clinical space with medspaCy: a new clinical text processing toolkit in Python. In: AMIA Annual Symposium Proceedings. vol. 2021. American Medical Informatics Association; 2021. p. 438.
Soysal E, Wang J, Jiang M, Wu Y, Pakhomov S, Liu H, et al. CLAMP-a toolkit for efficiently building customized clinical natural language processing pipelines. J Am Med Inform Assoc. 2018;25(3):331–6.
Naseem U, Khushi M, Reddy VB, Rajendran S, Razzak I, Kim J. BioALBERT: A Simple and Effective Pre-trained Language Model for Biomedical Named Entity Recognition. In: 2021 International Joint Conference on Neural Networks (IJCNN). 2021. pp. 1–7.
Kanakarajan Kr, Kundumani B, Sankarasubbu M. BioELECTRA: Pretrained biomedical text encoder using discriminators. In: Demner-Fushman, D., Cohen, K.B., Ananiadou, S., Tsujii, J. (eds.) Proceedings of the 20th Workshop on Biomedical Language Processing. Association for Computational Linguistics, Online; 2021. pp. 143–154.
Beltagy I, Lo K, Cohan A. SciBERT: A pretrained language model for scientific text. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong: Association for Computational Linguistics; 2019. pp. 3615–20. https://doi.org/10.18653/v1/D19-1371.
Lee J, Yoon W, Kim S, Kim D, Kim S, So CH, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. 2020;36(4):1234–40.
Gu Y, Tinn R, Cheng H, Lucas M, Usuyama N, Liu X, Naumann T, Gao J, Poon H. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH). 2021;3(1):1–23.
Gururangan S, Marasović A, Swayamdipta S, Lo K, Beltagy I, Downey D, Smith NA. Don’t stop pretraining: Adapt language models to domains and tasks. In: Jurafsky, D., Chai, J., Schluter, N., Tetreault, J. (eds.) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online; 2020. pp. 8342–8360.
Michalopoulos G, Wang Y, Kaka H, Chen H, Wong A. UmlsBERT: Clinical domain knowledge augmentation of contextual embeddings using the Unified Medical Language System Metathesaurus. In: Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., Zhou, Y. (eds.) Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online; 2021. pp. 1744–1753. https://doi.org/10.18653/v1/2021.naacl-main.139. https://aclanthology.org/2021.naacl-main.139.
Raza S, Reji DJ, Shajan F, Bashir SR. Large-scale application of named entity recognition to biomedicine and epidemiology. PLoS Digit Health. 2022;1(12):e0000152.
Sanh V, Debut L, Chaumond J, Wolf T. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. 2019. arXiv preprint arXiv:191001108.
Manzini E, Garrido-Aguirre J, Fonollosa J, Perera-Lluna A. Mapping layperson medical terminology into the Human Phenotype Ontology using neural machine translation models. Expert Syst Appl. 2022;204:117446.
Patterson OV, Jones M, Yao Y, Viernes B, Alba PR, Iwashyna TJ, et al. Extraction of vital signs from clinical notes. Stud Health Technol Inform. 2015;216:1035.
Gavrilov D, Gusev A, Korsakov I, Novitsky R, Serova L. Feature extraction method from electronic health records in Russia. In: Conference of Open Innovations Association, FRUCT. Helsinki: FRUCT Oy; 2020. pp. 497–500.
Maffini MD, Ojea FA, Manzotti M. Automatic Detection of Vital Signs in Clinical Notes of the Outpatient Settings. In: MIE. 2020. pp. 1211–2.
Genes N, Chandra D, Ellis S, Baumlin K. Validating emergency department vital signs using a data quality engine for data warehouse. Open Med Inform J. 2013;7:34.
Camburu OM, Rocktäschel T, Lukasiewicz T, Blunsom P. e-SNLI: natural language inference with natural language explanations. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS’18. Red Hook: Curran Associates Inc.; 2018. pp. 9560–9572.
Bowman S, Angeli G, Potts C, Manning C. A large annotated corpus for learning natural language inference. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015). Association for Computational Linguistics; 2015. pp. 632–642. https://doi.org/10.18653/v1/d15-1075.
Kumar S, Talukdar P. NILE: Natural language inference with faithful natural language explanations. In: Jurafsky, D., Chai, J., Schluter, N., Tetreault, J. (eds.) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online; 2020. pp. 8730–8742. https://doi.org/10.18653/v1/2020.acl-main.771. https://aclanthology.org/2020.acl-main.771.
Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I, et al. Language models are unsupervised multitask learners. OpenAI Blog. 2019;1(8):9.
Narang S, Raffel C, Lee K, Roberts A, Fiedel N, Malkan K. WT5?! Training Text-to-Text Models to Explain their Predictions. 2020. arXiv preprint arXiv:200414546.
Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. 2019. arXiv preprint arXiv:191010683.
Josephson JR, Josephson SG, editors. Abductive Inference: Computation, Philosophy, Technology. Cambridge: Cambridge University Press; 1994.
Campos DG. On the distinction between Peirce’s abduction and Lipton’s inference to the best explanation. Synthese. 2011;180(3):419–42.