medical examination

  • Article Type: Journal Article
    To assess the utility of wearable cameras in medical examinations, we created a physician-view, video-based examination question and explanation. The survey results indicated that these cameras can enhance the evaluative and educational capabilities of medical examinations.

  • Article Type: Journal Article
    The dental treatment of patients with oral cavity and oropharyngeal squamous cell carcinoma (OOPSCC) may be challenging for dentists. This study aimed to characterize systemic changes in patients with OOPSCC undergoing dental treatment prior to cancer therapy, with a specific focus on laboratory assessments. The primary objectives included identifying potential adverse events, such as infections or bleeding, resulting from dental procedures. Additionally, the study aimed to correlate baseline patient characteristics with treatment-related toxicities. This was a prospective cohort study of 110 OOPSCC patients referred to the Dental Oncology Service at the São Paulo State Cancer Institute, Brazil, between November 2019 and December 2020. Comorbidities, sociodemographic data, medications in use, cancer treatment-related toxicities, and altered laboratory test results were correlated. The most common comorbidities and altered laboratory results were hypertension, dyslipidemia, and diabetes, as well as elevated levels of C-reactive protein, hemoglobin, and hematocrit. Toxicities exhibited a progressive pattern over time, encompassing oral mucositis (OM), xerostomia, dysphagia, dysgeusia, trismus, and radiodermatitis. No correlation was found between comorbidities and cancer treatment-related toxicities; a positive correlation was found between medications in use and OM, and a negative correlation between medications and dysgeusia. OM was associated with altered thyroxine (T4) and free thyroxine (FT4), calcium, urea, creatinine, alkaline phosphatase, and syphilis. Family income and housing were predictors of OM. Altered T4/FT4/urea/calcium/alkaline phosphatase/creatinine/syphilis may be useful clinical predictors of OM. Despite the elevated prevalence of comorbidities and abnormal laboratory findings, dental treatment prior to cancer treatment yielded no adverse events.

  • Article Type: Journal Article
    BACKGROUND: Large language models like ChatGPT have revolutionized the field of natural language processing with their capability to comprehend and generate textual content, showing great potential to play a role in medical education. This study aimed to quantitatively evaluate and comprehensively analyze the performance of ChatGPT on three types of national medical examinations in China: the National Medical Licensing Examination (NMLE), the National Pharmacist Licensing Examination (NPLE), and the National Nurse Licensing Examination (NNLE).
    METHODS: We collected questions from the Chinese NMLE, NPLE, and NNLE from 2017 to 2021. In the NMLE and NPLE, each exam consists of 4 units, while in the NNLE, each exam consists of 2 units. Questions with figures, tables, or chemical structures were manually identified and excluded by a clinician. We applied a direct instruction strategy via multiple prompts to force ChatGPT to generate a clear answer, with the capability to distinguish between single-choice and multiple-choice questions.
    RESULTS: ChatGPT failed to reach the accuracy threshold of 0.6 in any of the three types of examinations over the five years. Specifically, in the NMLE, the highest recorded accuracy was 0.5467, attained in both 2018 and 2021. In the NPLE, the highest accuracy was 0.5599, in 2017. In the NNLE, the best result was achieved in 2017, with an accuracy of 0.5897, which is also the highest accuracy in our entire evaluation. ChatGPT's performance showed no significant difference across units, but a significant difference across question types. ChatGPT performed well in a range of subject areas, including clinical epidemiology, human parasitology, and dermatology, as well as in various medical topics such as molecules, health management and prevention, and diagnosis and screening.
    CONCLUSIONS: These results indicate that ChatGPT failed the NMLE, NPLE, and NNLE in China from 2017 to 2021, but they also show the great potential of large language models in medical education. In the future, high-quality medical data will be required to improve performance.
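
    As a rough illustration of the pipeline this abstract describes, the sketch below scores a batch of licensing-exam questions against an answer key. It is an assumption-laden sketch, not the authors' code: the OpenAI chat API, the prompt wording, and the question-dictionary layout are all illustrative; only the direct-instruction idea, the single-/multiple-choice distinction, and the 0.6 pass threshold come from the abstract.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_prompt(stem: str, options: dict[str, str], multiple: bool) -> str:
    """Direct-instruction prompt: list the options, demand a bare-letter answer."""
    kind = "one or more letters" if multiple else "exactly one letter"
    lines = "\n".join(f"{key}. {text}" for key, text in sorted(options.items()))
    return f"{stem}\n{lines}\nAnswer with {kind} from A-E only, with no explanation."

def grade(questions: list[dict]) -> float:
    """Accuracy over dicts with 'stem', 'options', 'answer' (sorted letters), 'multiple'."""
    correct = 0
    for q in questions:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": build_prompt(
                q["stem"], q["options"], q["multiple"])}],
            temperature=0,
        ).choices[0].message.content
        # Assumes the model complied with the letters-only instruction.
        predicted = "".join(sorted({c for c in reply.strip().upper() if c in "ABCDE"}))
        correct += predicted == q["answer"]
    return correct / len(questions)

# A unit passes only if accuracy reaches the 0.6 threshold used in the study:
# grade(nmle_2021_unit1) >= 0.6
```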

  • Article Type: Journal Article
    BACKGROUND: ChatGPT is among the most popular large language models (LLMs), exhibiting proficiency in various standardized tests, including multiple-choice medical board examinations. However, its performance on otolaryngology-head and neck surgery (OHNS) certification examinations and open-ended medical board certification examinations has not been reported.
    OBJECTIVE: We aimed to evaluate the performance of ChatGPT on OHNS board examinations and propose a novel method to assess an AI model's performance on open-ended medical board examination questions.
    METHODS: Twenty-one open-ended questions were adopted from the Royal College of Physicians and Surgeons of Canada's sample examination to query ChatGPT on April 11, 2023, with and without prompts. A new model, named Concordance, Validity, Safety, Competency (CVSC), was developed to evaluate its performance.
    RESULTS: In the open-ended question assessment, ChatGPT achieved a passing mark (an average of 75% across 3 trials) and demonstrated higher accuracy with prompts. The model demonstrated high concordance (92.06%) and satisfactory validity. While demonstrating considerable consistency in regenerating answers, it often provided only partially correct responses. Notably, concerning features such as hallucinations and self-conflicting answers were observed.
    CONCLUSIONS: ChatGPT achieved a passing score in the sample examination and demonstrated the potential to pass the OHNS certification examination of the Royal College of Physicians and Surgeons of Canada. Some concerns remain due to its hallucinations, which could pose risks to patient safety. Further adjustments are necessary to yield safer and more accurate answers for clinical implementation.
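
    The abstract names the CVSC dimensions but not the scoring rubric, so the tally below is purely an assumed reading, not the authors' published method: binary concordance/validity/safety flags and a 0-1 competency grade per trial answer, averaged across all question-trial pairs.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    concordant: bool   # the answer addresses the question asked
    valid: bool        # the content is factually acceptable
    safe: bool         # nothing that could endanger a patient
    competency: float  # graded 0.0-1.0 against the answer key

def summarize(ratings: list[Rating]) -> dict[str, float]:
    """Average each CVSC dimension over all question-trial pairs."""
    return {
        "concordance": mean(r.concordant for r in ratings),
        "validity":    mean(r.valid for r in ratings),
        "safety":      mean(r.safe for r in ratings),
        "competency":  mean(r.competency for r in ratings),
    }

# 21 questions x 3 trials -> 63 Rating objects; under this reading, a passing
# mark corresponds to a mean competency of at least 0.75, as in the RESULTS above.
```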

  • Article Type: Journal Article
    BACKGROUND: ChatGPT has gained global attention recently owing to its high performance in generating a wide range of information and retrieving any kind of data instantaneously. ChatGPT has also been tested for the United States Medical Licensing Examination (USMLE) and has successfully cleared it. Thus, its usability in medical education is now one of the key discussions worldwide.
    OBJECTIVE: The objective of this study is to evaluate the performance of ChatGPT in medical biochemistry using clinical case vignettes.
    METHODS: The performance of ChatGPT was evaluated in medical biochemistry using 10 clinical case vignettes. The clinical case vignettes were randomly selected and inputted into ChatGPT along with the response options. We tested each clinical case twice. The answers generated by ChatGPT were saved and checked against our reference material.
    RESULTS: ChatGPT generated correct answers for 4 questions on the first attempt. For the other cases, the responses generated by ChatGPT differed between the first and second attempts. In the second attempt, ChatGPT provided correct answers for 6 of the 10 questions and incorrect answers for the remaining 4. To our surprise, for case 3, different answers were obtained across multiple attempts. We believe this happened owing to the complexity of the case, which involved addressing various critical medical aspects related to amino acid metabolism in a balanced approach.
    CONCLUSIONS: According to the findings of our study, ChatGPT may not be considered an accurate information provider for applications in medical education intended to improve learning and assessment. However, our study was limited by a small sample size (10 clinical case vignettes) and the use of the publicly available version of ChatGPT (version 3.5). Although artificial intelligence (AI) has the capability to transform medical education, we emphasize that data produced by such AI systems must be validated for correctness and dependability before being implemented in practice.
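
    The two-attempt protocol above amounts to a repeatability check. A minimal, model-agnostic sketch of that check follows; `ask` is a hypothetical callable (not from the paper) that submits one vignette with its response options and returns the chosen option letter.

```python
from typing import Callable

def consistency_report(vignettes: dict[str, str],
                       ask: Callable[[str], str],
                       attempts: int = 2) -> dict[str, bool]:
    """Map each vignette id to True when every attempt returns the same answer.

    Flags cases like case 3 above, where repeated attempts disagreed.
    """
    return {vid: len({ask(text) for _ in range(attempts)}) == 1
            for vid, text in vignettes.items()}
```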

  • Article Type: Journal Article
    BACKGROUND: ChatGPT has shown impressive performance on national medical licensing examinations, such as the United States Medical Licensing Examination (USMLE), even passing it with expert-level performance. However, there is a lack of research on its performance on the national licensing medical examinations of low-income countries. In Peru, where almost one out of three examinees fails the national licensing medical examination, ChatGPT has the potential to enhance medical education.
    OBJECTIVE: We aimed to assess the accuracy of ChatGPT using GPT-3.5 and GPT-4 on the Peruvian National Licensing Medical Examination (Examen Nacional de Medicina [ENAM]). Additionally, we sought to identify factors associated with incorrect answers provided by ChatGPT.
    METHODS: We used the ENAM 2022 data set, which consisted of 180 multiple-choice questions, to evaluate the performance of ChatGPT. Various prompts were used, and accuracy was evaluated. The performance of ChatGPT was compared to that of a sample of 1025 examinees. Factors such as question type, Peruvian-specific knowledge, discrimination, difficulty, quality of questions, and subject were analyzed to determine their influence on incorrect answers. Questions that received incorrect answers underwent a three-step process involving different prompts to explore the potential impact of adding roles and context on ChatGPT's accuracy.
    RESULTS: GPT-4 achieved an accuracy of 86% on the ENAM, followed by GPT-3.5 with 77%. The accuracy obtained by the 1025 examinees was 55%. There was fair agreement (κ=0.38) between GPT-3.5 and GPT-4. Moderate-to-high-difficulty questions were associated with incorrect answers in the crude and adjusted models for GPT-3.5 (odds ratio [OR] 6.6, 95% CI 2.73-15.95) and GPT-4 (OR 33.23, 95% CI 4.3-257.12). After reinputting questions that had received incorrect answers, GPT-3.5 went from 41 (100%) to 12 (29%) incorrect answers, and GPT-4 from 25 (100%) to 4 (16%).
    CONCLUSIONS: Our study found that ChatGPT (GPT-3.5 and GPT-4) can achieve expert-level performance on the ENAM, outperforming most of our examinees. We found fair agreement between GPT-3.5 and GPT-4. Incorrect answers were associated with question difficulty, which may resemble human performance. Furthermore, by reinputting questions that initially received incorrect answers with different prompts containing additional roles and context, ChatGPT achieved improved accuracy.
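
    The three-step reinput procedure (bare question, then an added role, then added context) can be sketched as a prompt-escalation loop. Everything below is illustrative: the abstract does not give the exact prompt wording, and the OpenAI chat API stands in for whatever interface the authors used.

```python
from openai import OpenAI

client = OpenAI()

# Step 1: the bare question; step 2: add a role; step 3: add role plus context.
STEPS = [
    "{q}",
    "You are a physician taking the Peruvian ENAM.\n{q}",
    "You are a physician taking the Peruvian ENAM. Consider every option "
    "carefully before choosing.\n{q}",
]
SUFFIX = "\nAnswer with a single letter (A-E) only."

def step_of_first_correct(question: str, answer: str, model: str = "gpt-4") -> int:
    """Return the 1-based step at which the model answers correctly, or -1."""
    for step, template in enumerate(STEPS, start=1):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": template.format(q=question) + SUFFIX}],
            temperature=0,
        ).choices[0].message.content
        if reply.strip().upper()[:1] == answer:  # assumes a letters-only reply
            return step
    return -1
```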

  • Article Type: Journal Article
    BACKGROUND: As the novel coronavirus disease 2019 (COVID-19) continues its pandemic surge globally, along with its social distancing norms, the in-person conduct of practical examinations for medical graduates and postgraduates has become difficult. Software-based systems and social media platforms could provide alternatives for ensuring regular medical education and exam-oriented assessment. In this context, we evaluated our own experience with the virtual conduct of semester practical exams for medical graduates.
    METHODS: This prospective study was conducted in a Gynaecology and Obstetrics department. We employed live-streaming educational video conferencing software for virtual interaction between medical students, patients (case presentations), and internal and external examiners. The outcomes were evaluated in terms of the conduct of the various components of the practical examination: viva, case presentations, instruments, slides, and specimen examination. Statistical analysis was performed with descriptive statistics in Microsoft Excel.
    RESULTS: Virtual examination and evaluation were performed on 150 medical students by examiners from a distant location. No problems occurred except for a few short (less than 5 min) interruptions due to internet connectivity issues. 125/150 (83.5%) of the medical students and all examiners (2 internal and 2 external) expressed satisfaction with the virtual medical evaluation.
    CONCLUSIONS: 83.5% of the medical students and all examiners expressed satisfaction with virtual medical evaluation during the COVID pandemic. Our findings suggest that the virtual conduct of annual practical medical exams through a video conferencing platform appears to be an optimal alternative during the pandemic.

  • Article Type: Case Reports
    BACKGROUND: Medical examination of the adult population is aimed at diagnosing chronic noncommunicable diseases and their risk factors. An unreasonable choice of screening methods and information processing can lead to an unjustified waste of resources, with little benefit or even damage to the health of the population, and to the distortion of statistical information.
    OBJECTIVE: To evaluate the quality of medical examinations for chronic noncommunicable diseases among the adult population of the Irkutsk region from 2013 to 2017.
    METHODS: We analyzed the Adult Clinical Examination report N 131 using comparative and statistical methods. It was selected for this study because it provides a summary of the findings of the chronic noncommunicable disease surveys from 2013 to 2017 and thus precluded an unnecessary investment of time and labor. The report comprises sections 1000 to 7000, which provide medical examination data, such as demographic information and statistics on various diseases, including neoplasms.
    RESULTS: The years 2016 and 2017 were notable for the emergence of 567 new cancers, which accounted for 12.9% of total diagnoses. In 2017, there were 115 192 patients with cardiovascular diseases, a fivefold increase from 2013. Among the neurological dysfunctions, 0.9% were ischemic attacks and related syndromes; the remaining 99.1% were not broken down in the report. The respiratory system diseases were pneumonia, bronchitis, chronic obstructive pulmonary disease (COPD), asthma, status asthmaticus, and bronchiectasis. These diseases made up 68.3% of all pathologies of the respiratory system; the remaining 11 327 cases were not classified nosologically.
    CONCLUSIONS: Every section of the N 131 report showed significant inconsistencies in the summary survey results, both for the Irkutsk Region and for Russia. This could result in a misunderstanding of disease prevalence and, consequently, in improper decision making. At this point, approaches to the statistical analysis of health surveys must be reconsidered on a national scale.

  • Article Type: Journal Article
    OBJECTIVE: Custody conditions in police cells are often demeaning and considered inappropriate for human beings. The detention of young adolescents in police custody has received little attention. Our study aimed to describe the characteristics of adolescents under 18 detained in custody.
    METHODS: We studied all arrestees aged 13-17 examined over 1 year (January 1-December 31, 2014) in a suburban district near Paris. We evaluated the proportion of adolescents under 18 among all arrestees detained in custody, as well as their medical history, addictive behaviors, perceived health status, and opinion of custody.
    RESULTS: Arrestees aged 13-17 numbered 1859 individuals. They were predominantly male (94%) and accounted for 19% of all examinations in custody. Nearly half of the arrestees aged 13-15 (42%) and two thirds of those aged 16-17 (65%) had been previously detained in police cells. Somatic and psychiatric disorders were reported by 7% and 4%, respectively, of arrestees aged 13-17. Alcohol, tobacco, and cannabis consumption were reported by 5%, 24%, and 12%, respectively, of arrestees aged 13-15. These proportions were lower than the 16%, 50%, and 35%, respectively, reported by arrestees aged 16-17 (p < 0.0001). Assaults were reported by 18% of arrestees aged 13-17. They had a fair, bad, or very bad opinion of custody in 43% of cases.
    CONCLUSIONS: The detention of adolescents in police stations is commonly associated with assaults at the time of arrest. High proportions of adolescent arrestees smoke tobacco or cannabis. We suggest that the medical examination in custody could be an opportunity for adolescents to initiate access to health care.

  • Article Type: Journal Article
    BACKGROUND: Little information is available regarding the medical status and health care needs of female arrestees. Our objective was to evaluate the perceived health and somatic or psychiatric disorders reported by female arrestees in police cells.
    METHODS: We conducted an observational study in a regional reference department of forensic medicine in France. We studied female arrestees examined in police cells (January 1-June 30, 2013). Data were collected regarding the individuals' medical characteristics, addictive behaviours, and perceived health status, as well as reported assaults or recent traumatic injuries. We recorded medical decisions regarding fitness for detention in police cells.
    RESULTS: A total of 438 women (median age, 29; range, 13-67) accounted for 5% of the 7408 examined arrestees. Females considered their overall health good or very good in 314/395 cases (70%). Women reported chronic somatic or psychiatric disorders more frequently than men (89/379, 23% vs. 757/6135, 12%, p < 0.001 and 59/379, 15% vs. 392/6319, 6%, p < 0.001, respectively). Daily tobacco consumption and cannabis use were reported by 255/403 (63%) and 98/438 (22%) female arrestees, respectively. Physical assaults were reported in 113/415 cases (27%). Female arrestees were considered fit for detention in 92% of cases. Among 24 pregnant arrestees, 6 (25%) were unfit for detention, 2 (8%) were fit for custody during daytime only, and 16 (67%) were fit for detention if certain conditions were met.
    CONCLUSIONS: Detention in police custody involves a minority of females. Females are older and report somatic or psychiatric disorders more frequently than males.