BACKGROUND: Artificial intelligence (AI) tools are designed to create or generate content from their trained parameters using an online conversational interface. AI has opened new avenues in redefining the role boundaries of teachers and learners and has the potential to impact the teaching-learning process.
METHODS: In this descriptive proof-of-concept cross-sectional study, we explored the application of three generative AI tools to the theme of drug treatment of hypertension to generate: (1) specific learning outcomes (SLOs); (2) test items (A-type and case-cluster MCQs; SAQs; OSPEs); (3) test standard-setting parameters for medical students.
RESULTS: Analysis of the AI-generated output showed profound homology but divergence in quality and responsiveness to refined search queries. The SLOs identified key domains of antihypertensive pharmacology and therapeutics relevant to the stages of the medical program, stated with appropriate action verbs as per Bloom's taxonomy. Test items often had clinical vignettes aligned with the key domain stated in the search queries. Some test items related to A-type MCQs had construction defects, multiple correct answers, and dubious appropriateness to the learner's stage. ChatGPT generated explanations for test items, thus enhancing their usefulness in supporting self-study by learners. Integrated case-cluster items had focused clinical case-description vignettes, integration across disciplines, and targeted higher levels of competencies. The responses of the AI tools on standard-setting varied. Individual questions for each SAQ clinical scenario were mostly open-ended. The AI-generated OSPE test items were appropriate for the learner's stage and identified relevant pharmacotherapeutic issues. The model answers supplied for both SAQs and OSPEs can aid course instructors in planning classroom lessons, identifying suitable instructional methods, and establishing grading rubrics, and can serve learners as a study guide. Key lessons learnt for improving the quality of AI-generated test items are outlined.
CONCLUSIONS: AI tools are useful adjuncts for planning instructional methods, identifying themes for test blueprinting, generating test items, and guiding test standard-setting appropriate to the learners' stage in the medical program. However, experts need to review the content validity of AI-generated output. We expect AI to influence the medical education landscape, empower learners, and align competencies with curriculum implementation. AI literacy is an essential competency for health professionals.