Keywords: HPO; Llama 2; concept normalization; fine-tuning; large language model

MeSH: Humans; Rare Diseases; Natural Language Processing; Biological Ontologies; Vocabulary, Controlled; Phenotype

Source: DOI: 10.1093/jamia/ocae133   PDF (PubMed)

Abstract:
OBJECTIVE: We aim to develop a novel method for rare disease concept normalization by fine-tuning Llama 2, an open-source large language model (LLM), using a domain-specific corpus sourced from the Human Phenotype Ontology (HPO).
METHODS: We developed an in-house, template-based script to generate two corpora for fine-tuning. The first (NAME) contains standardized HPO names, sourced from the HPO vocabularies, along with their corresponding identifiers. The second (NAME+SYN) includes the HPO names and identifiers plus half of each concept's synonyms. Subsequently, we fine-tuned Llama 2 (Llama2-7B) on each sentence set and conducted an evaluation using a range of sentence prompts and various phenotype terms.
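The template-based corpus generation described above can be sketched as follows. This is a minimal illustration, not the authors' actual script: the sentence templates, the toy HPO records, and the `build_corpus` helper are all assumptions; the real corpora are generated from the full HPO vocabularies.

```python
import random

# Toy stand-in for HPO records (hypothetical examples); the paper's corpora
# are built from the complete HPO vocabularies.
HPO_TERMS = {
    "HP:0001250": {"name": "Seizure", "synonyms": ["Epileptic seizure", "Seizures"]},
    "HP:0000252": {"name": "Microcephaly", "synonyms": ["Abnormally small skull", "Small head"]},
}

# Assumed sentence templates pairing a phenotype term with its identifier.
TEMPLATES = [
    "The HPO ID for {term} is {hpo_id}.",
    "{term} is normalized to {hpo_id}.",
]

def build_corpus(include_synonyms: bool, seed: int = 0):
    """Generate fine-tuning sentences: NAME uses only standard names;
    NAME+SYN additionally includes half of each concept's synonyms."""
    rng = random.Random(seed)
    sentences = []
    for hpo_id, rec in HPO_TERMS.items():
        terms = [rec["name"]]
        if include_synonyms:
            syns = rec["synonyms"]
            terms += rng.sample(syns, k=len(syns) // 2)  # half of the synonyms
        for term in terms:
            sentences.append(rng.choice(TEMPLATES).format(term=term, hpo_id=hpo_id))
    return sentences

name_corpus = build_corpus(include_synonyms=False)      # NAME: one sentence per concept
name_syn_corpus = build_corpus(include_synonyms=True)   # NAME+SYN: names plus sampled synonyms
```

Each generated sentence ties a surface form to its HPO identifier, so fine-tuning teaches the model the term-to-ID mapping directly.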
RESULTS: When the phenotype terms for normalization were included in the fine-tuning corpora, both models demonstrated nearly perfect performance, averaging over 99% accuracy. In comparison, ChatGPT-3.5 achieved only ∼20% accuracy in identifying HPO IDs for phenotype terms. When single-character typos were introduced into the phenotype terms, the accuracy of NAME and NAME+SYN dropped to 10.2% and 36.1%, respectively, but rose to 61.8% (NAME+SYN) with additional typo-specific fine-tuning. For terms sourced from the HPO vocabularies as unseen synonyms, the NAME model achieved 11.2% accuracy, while the NAME+SYN model achieved 92.7%.
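The single-character typo evaluation above can be sketched as below. The `introduce_typo` procedure and the `accuracy` helper are hypothetical illustrations; the abstract does not specify how the typos were generated.

```python
import random
import string

def introduce_typo(term: str, seed: int = 0) -> str:
    """Replace one randomly chosen character with a different lowercase
    letter (assumed single-character typo procedure)."""
    rng = random.Random(seed)
    i = rng.randrange(len(term))
    replacement = rng.choice([c for c in string.ascii_lowercase if c != term[i].lower()])
    return term[:i] + replacement + term[i + 1:]

def accuracy(predicted_ids, gold_ids):
    """Fraction of phenotype terms normalized to the correct HPO ID."""
    return sum(p == g for p, g in zip(predicted_ids, gold_ids)) / len(gold_ids)

typo_term = introduce_typo("Microcephaly")
# Hypothetical model outputs vs. gold HPO IDs, scored the same way as the paper's
# accuracy figures (fraction of terms mapped to the correct identifier).
acc = accuracy(["HP:0001250", "HP:0000252"], ["HP:0001250", "HP:0000001"])
```

Reported accuracies (e.g., 10.2% for NAME on typo terms) are this fraction computed over the full evaluation set.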
CONCLUSIONS: Our fine-tuned models demonstrate the ability to normalize phenotype terms unseen in the fine-tuning corpus, including misspellings, synonyms, terms from other ontologies, and laymen's terms. Our approach provides a solution for using LLMs to identify named medical entities in clinical narratives while successfully normalizing them to standard concepts in a controlled vocabulary.