Licensure, Medical

  • Article type: Journal Article
    BACKGROUND: The National Medical Licensing Examination (NMLE) is the only objective, standardized metric for evaluating whether a medical student possesses the professional knowledge and skills necessary to work as a physician. However, the overall NMLE pass rate at our hospital in 2021 was much lower than that of Peking Union Medical College Hospital and required further improvement.
    METHODS: To find the reasons for the unsatisfactory performance in 2021, the quality improvement team (QIT) organized regular face-to-face meetings for in-depth discussion and a questionnaire survey, and analyzed the data by Pareto analysis and brainstorming. After the causes were identified, the "Plan-Do-Check-Act" (PDCA) cycle was applied to identify and solve problems, which included formulating and implementing specific training plans with Gantt charts, checking their effects, and making continuous improvements from 2021 to 2022. Detailed information about student performance in 2021 and 2022, together with attendance, assessment, evaluation, and suggestions from our hospital, was provided by the relevant departments, and pass-rate data were collected online.
    RESULTS: After the PDCA plan, the NMLE pass rate in our hospital increased by 10.89 percentage points, from 80.15% in 2021 to 91.04% in 2022 (P = 0.0109), with the skill examination pass rate rising from 95.59% in 2021 to 99.25% in 2022 (P = 0.0581) and the theoretical examination pass rate from 84.5% in 2021 to 93.13% in 2022 (P = 0.027). Additionally, the mean score of all examinees increased, with the theoretical examination score rising from 377.0 ± 98.76 in 2021 to 407.6 ± 71.94 in 2022 (P = 0.004).
    CONCLUSIONS: Our results show that the PDCA plan was successfully applied in our hospital and improved the NMLE pass rate in 2022. The PDCA plan may provide a practical framework for future medical education and further improve the NMLE pass rate in the coming years.
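    As a quick arithmetic cross-check of the headline comparison (not part of the original study), a pooled two-proportion z-test can be sketched in Python. The cohort sizes are not stated in the abstract; 109/136 and 122/134 are hypothetical counts, chosen only because they reproduce the reported 80.15% and 91.04% exactly:

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 109/136 = 80.15% (2021), 122/134 = 91.04% (2022)
z, p = two_proportion_z(109, 136, 122, 134)
print(f"z = {z:.2f}, p = {p:.4f}")  # p lands close to the reported 0.0109
```

    A chi-square test without continuity correction would give the same p-value; the abstract does not say which test the authors actually used.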

  • Article type: Journal Article
    OBJECTIVE: Students who earn their medical doctorate (MD) in the U.S. must pass the United States Medical Licensing Exam (USMLE) Step-1. The application process for students with disabilities who seek Step-1 accommodations can be arduous, barrier-ridden, and can impose a significant burden that may have long-lasting effects. We sought to understand the experiences of medical students with Type-1 Diabetes (T1D) who applied for Step-1 accommodations.
    METHODS: A Qualtrics survey was administered to students enrolled in Liaison Committee on Medical Education (LCME)-accredited MD programs who disclosed having a primary diagnosis of T1D. Basic counts and qualitative inductive analyses were conducted.
    RESULTS: Of the 21 surveys sent, 16 (76.2%) participants responded. Of the 16 respondents, 11 (68.8%) applied for USMLE Step-1 accommodations, whereas 5 (31.2%) did not. Of the 11 who applied for accommodations, 7 (63.6%) received the accommodations requested, while 4 (36.4%) did not. Of those who received the accommodations requested, 5/7 (71.4%) experienced at least one diabetes-related barrier on exam day. Of those who did not apply for Step-1 accommodations, 4/5 (80%) participants reported experiencing at least one diabetes-related barrier on exam day. Overall, 11/16 (68.8%) students experienced barriers on exam day with or without accommodations. Qualitative analysis revealed themes among participants about their experience with the process: frustration, anger, stress, and some areas of general satisfaction.
    CONCLUSIONS: This study reports the perceptions of students with T1D about barriers and inequities in the Step-1 accommodations application process. Students with and without accommodations encountered T1D-related obstacles on test day.

  • Article type: Journal Article
    ChatGPT has garnered attention as a multifaceted AI chatbot with potential applications in medicine. Despite intriguing preliminary findings in areas such as clinical management and patient education, a substantial knowledge gap remains in comprehensively understanding the opportunities and limitations of ChatGPT's capabilities, especially in medical test-taking and education. A total of n = 2,729 USMLE Step 1 practice questions were extracted from the Amboss question bank. After excluding 352 image-based questions, the remaining 2,377 text-based questions were categorized and entered manually into ChatGPT, and its responses were recorded. ChatGPT's overall performance was analyzed by question difficulty, category, and content with regard to specific signal words and phrases. ChatGPT achieved an overall accuracy of 55.8% on the n = 2,377 USMLE Step 1 preparation questions obtained from the Amboss online question bank. It demonstrated a significant inverse correlation between question difficulty and performance (rs = -0.306; p < 0.001), maintaining accuracy comparable to the human user peer group across different levels of question difficulty. Notably, ChatGPT outperformed its peer group on serology-related questions (61.1% vs. 53.8%; p = 0.005) but struggled with ECG-related content (42.9% vs. 55.6%; p = 0.021). ChatGPT performed significantly worse on pathophysiology-related question stems (signal phrase: "what is the most likely/probable cause"). Otherwise, ChatGPT performed consistently across question categories and difficulty levels. These findings emphasize the need for further investigation of the potential and limitations of ChatGPT in medical examination and education.
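    The reported rank correlation between difficulty and performance can be illustrated with a from-scratch Spearman computation. The difficulty levels and per-level accuracies below are made-up illustrative numbers, not the study's data, and tie handling is omitted for brevity:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Tie handling is omitted for brevity (fine for distinct values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up illustrative numbers, NOT the study's data: accuracy falling
# monotonically across five Amboss-style difficulty levels
difficulty = [1, 2, 3, 4, 5]
accuracy = [0.71, 0.63, 0.55, 0.47, 0.38]
print(spearman_rho(difficulty, accuracy))  # approximately -1.0
```

    A perfectly monotone decline gives rs = -1; the study's weaker rs = -0.306 reflects per-question (not per-level) variation.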

  • Article type: English Abstract
    BACKGROUND: The learning objectives in the current cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" have been revised as part of the further development of the National Competency-Based Catalogue of Learning Objectives for Medicine (NKLM) to its new version 2.0. Since the NKLM is designed as an interdisciplinary catalogue, an assignment of learning objectives to subjects seemed necessary from the point of view of various stakeholders. Thus, the German Association of Scientific Medical Societies (AWMF) and the German medical faculties initiated a subject assignment process. The assignment process for the subject "Physical and Rehabilitative Medicine, Naturopathic Medicine" (PRM-NHV; according to the subject list of the first draft of the planned new medical licensing regulations from 2020) is presented in this paper.
    METHODS: The AWMF invited its member societies to participate in assigning the learning objectives of chapters VI, VII, and VIII of the NKLM 2.0 to the individual subjects to which they consider themselves to contribute in teaching. For "PRM-NHV", representatives of the societies for rehabilitation sciences (DGRW), physical and rehabilitation medicine (DGPRM), orthopaedics and traumatology (DGOU), and naturopathy (DGNHK) participated. The learning objectives were selected and agreed in a structured consensus process following the Delphi methodology. Subsequently, the AWMF issued a subject recommendation for each learning objective.
    RESULTS: From the NKLM 2.0, a total of 100 competency-based learning objectives from chapters VII and VIII were agreed for the subject "PRM-NHV" by the representatives of the involved societies for presentation on the NKLM 2.0 online platform.
    CONCLUSIONS: In the context of the reform of medical studies in Germany, and under the umbrella of the AWMF and the German medical faculties, a broad consensus on competency-based learning objectives in the subject "PRM-NHV" was achieved. This provides important orientation for all medical faculties, both for the further development of teaching in the cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" under the 9th revision of the medical licensing regulations, which has been in force for twenty years, and for the preparation of the corresponding subjects in the draft bill of the new licensing regulations.

  • Article type: Journal Article
    OBJECTIVE: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence.
    METHODS: Three question banks were searched for key words regarding out-of-hospital births. A thematic analysis was then utilized to analyze the results.
    RESULTS: Forty-seven questions were identified; of these, 55% indicated absent, inadequate, limited, or irregular prenatal care in the question stem.
    CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leading to the potential for bias and adverse outcomes. Adjustments to question stems that accurately portray current evidence are recommended.

  • Article type: Journal Article
    No abstract available.

  • DOI:
    Article type: Journal Article
    BACKGROUND: Two Jewish medical students who were forced to discontinue their studies upon the rise of the Nazi regime returned/immigrated to Palestine and completed their internships there. A third student, although faced with many procedural limitations, was able to complete most of his studies in Berlin, including passing the MD examination. The first two students returned to Berlin after some years to sit for the doctoral examination, which enabled them to obtain a permanent medical license in Palestine. We describe the different backgrounds of the three students that enabled them to take the examination at Berlin's medical faculty during the Nazi regime. Follow-up of the three reveals distinguished medical careers during the British Mandate and the first years of the new State of Israel. The dissertations were signed and supported by three leading professors of the Berlin faculty, two of whom were found to have a National Socialist background.

  • Article type: Journal Article
    BACKGROUND: Evaluation of students' learning strategies can enhance academic support. Few studies have investigated differences in learning strategies between male and female students, as well as their impact on United States Medical Licensing Examination (USMLE) Step 1 and preclinical performance.
    METHODS: The Learning and Study Strategies Inventory (LASSI) was administered to the classes of 2019-2024 (female, n = 350; male, n = 262). Students' performance on preclinical first-year (M1) courses, preclinical second-year (M2) courses, and USMLE Step 1 was recorded. Independent t-tests evaluated differences between females and males on each LASSI scale. Pearson product-moment correlations determined which LASSI scales correlated with preclinical performance and the USMLE Step 1 examination.
    RESULTS: Of the 10 LASSI scales, Anxiety, Attention, Information Processing, Selecting Main Ideas, Test Strategies, and Using Academic Resources showed significant differences between genders. Females reported higher levels of Anxiety (p < 0.001), which significantly influenced their performance. While males and females scored similarly in Concentration, Motivation, and Time Management, these scales were significant predictors of performance variation in females. Test Strategies was the largest contributor to performance variation for all students, regardless of gender.
    CONCLUSIONS: Gender differences in learning influence performance on Step 1. Consideration of this study's results will allow for targeted interventions for academic success.

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    Efforts are being made to improve the time effectiveness of healthcare providers. Artificial intelligence tools can help transcribe and summarize physician-patient encounters and produce medical notes and recommendations. However, in addition to medical information, discussions between healthcare providers and patients include small talk and other information irrelevant to medical concerns. Because Large Language Models (LLMs) are predictive models that build their responses from the words in the prompt, small talk and irrelevant information may alter the response and the advice given. This study therefore investigated the impact of medical data mixed with small talk on the accuracy of medical advice provided by ChatGPT. USMLE Step 3 questions, both multiple-choice and open-ended, were used as a model for relevant medical data. First, small-talk sentences were gathered from human participants via the Mechanical Turk platform. Second, both sets of USMLE questions were arranged in a pattern in which each sentence of the original question was followed by a small-talk sentence. ChatGPT 3.5 and 4 were asked to answer both sets of questions with and without the small-talk sentences. Finally, a board-certified physician analyzed ChatGPT's answers and compared them to the formal correct answers. The analysis demonstrates that ChatGPT-3.5's ability to answer correctly was impaired when small talk was added to medical data (66.8% vs. 56.6%; p = 0.025); the effect was significant for open-ended questions (61.5% vs. 44.3%; p = 0.01) but not for multiple-choice questions (72.1% vs. 68.9%; p = 0.67). In contrast, small-talk phrases did not impair ChatGPT-4's ability on either type of question (83.6% and 66.2%, respectively). According to these results, ChatGPT-4 appears more accurate than the earlier 3.5 version, and small talk does not seem to impair its capability to provide medical recommendations. Our results are an important first step toward understanding the potential and limitations of using ChatGPT and other LLMs in physician-patient interactions that include casual conversation.
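    The interleaving pattern described above (each question sentence followed by a small-talk sentence) can be sketched as follows; the sentence splitter and the example stem and chatter are illustrative assumptions, not the study's actual materials:

```python
import re

def interleave_small_talk(question: str, small_talk: list[str]) -> str:
    """Insert a small-talk sentence after every sentence of a question,
    cycling through the small-talk pool. The regex splitter (split after
    ., !, or ?) is a simplification of whatever the study actually used."""
    sentences = re.split(r"(?<=[.!?])\s+", question.strip())
    parts = []
    for i, sentence in enumerate(sentences):
        parts.append(sentence)
        parts.append(small_talk[i % len(small_talk)])
    return " ".join(parts)

# Illustrative (hypothetical) question stem and small-talk pool
stem = "A 45-year-old man presents to the clinic. He reports chest pain."
chatter = ["Nice weather today.", "How was your weekend?"]
print(interleave_small_talk(stem, chatter))
```

    Feeding both the plain and the interleaved stems to each model and scoring the answers against the key reproduces the study's comparison design.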
