generative artificial intelligence

  • Article type: Journal Article
    BACKGROUND: This cross-sectional study assessed a generative-AI platform to automate the creation of accurate, appropriate, and compelling social-media (SoMe) posts from urological journal articles.
    METHODS: One hundred SoMe posts from the X (Twitter) profiles of the top 3 journals in urology were collected from Aug-2022 to Oct-2023. A freeware GPT tool was developed to auto-generate SoMe posts, which included title summarization, key findings, pertinent emojis, hashtags, and a DOI link to the article. Three physicians independently evaluated GPT-generated posts against a tetrafecta of accuracy and appropriateness criteria. Fifteen scenarios were created from 5 randomly selected posts from each journal. Each scenario contained both the original and the GPT-generated post for the same article. Five questions were formulated to investigate the posts' likability, shareability, engagement, understandability, and comprehensiveness. The paired posts were then randomized and presented to blinded academic authors and the general public through Amazon Mechanical Turk (AMT) responders for preference evaluation.
    RESULTS: Median (IQR) time for post auto-generation was 10.2 (8.5-12.5) seconds. Of the 150 rated GPT-generated posts, 115 (76.6%) met the correctness tetrafecta: 144 (96%) accurately summarized the title, 147 (98%) accurately presented the article's main findings, 131 (87.3%) used emojis appropriately, and 138 (92%) used hashtags appropriately. A total of 258 academic urologists and 493 AMT responders answered the surveys, wherein the GPT-generated posts consistently outperformed the original journals' posts for both academicians and AMT responders (P < .05).
    CONCLUSIONS: Generative AI can automate the creation of SoMe posts from urology journal abstracts that are both accurate and preferred by the academic community and the general public.
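    The tool's assembly step (title summary, key finding, emojis, hashtags, and a DOI link combined into one post) can be sketched in a few lines; the function and field names below are illustrative assumptions, not the authors' actual implementation.

    ```python
    # Hypothetical sketch (not the study's tool): combine the components the
    # abstract describes -- title summary, key finding, emojis, hashtags, and
    # a DOI link -- into a single social-media post string.
    def assemble_post(title_summary, key_finding, emojis, hashtags, doi):
        """Join generated components into one post, hashtags prefixed with '#'."""
        tags = " ".join(f"#{t}" for t in hashtags)
        return f"{emojis} {title_summary}\n{key_finding}\n{tags}\nhttps://doi.org/{doi}"

    # Example with made-up content and a placeholder DOI.
    post = assemble_post(
        title_summary="GPT tool auto-generates journal article posts",
        key_finding="Blinded readers preferred the AI posts (P < .05).",
        emojis="\U0001F916\U0001F4C4",  # robot and page emojis
        hashtags=["UroSoMe", "GenerativeAI"],
        doi="10.1000/example",
    )
    ```

    In a production version an LLM would produce each component; keeping the assembly step deterministic ensures the DOI link and hashtags stay well-formed.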

  • Article type: Journal Article
    BACKGROUND: Mental disorders have ranked among the top 10 prevalent causes of burden on a global scale. Generative artificial intelligence (GAI) has emerged as a promising and innovative technological advancement that has significant potential in the field of mental health care. Nevertheless, there is a scarcity of research dedicated to examining and understanding the application landscape of GAI within this domain.
    OBJECTIVE: This review aims to inform the current state of GAI knowledge and identify its key uses in the mental health domain by consolidating relevant literature.
    METHODS: Records were searched within 8 reputable sources, including the Web of Science, PubMed, IEEE Xplore, medRxiv, bioRxiv, Google Scholar, CNKI, and Wanfang databases, between 2013 and 2023. Our focus was on original, empirical research, published in English or Chinese, that uses GAI technologies to benefit mental health. For an exhaustive search, we also checked the studies cited by relevant literature. Two reviewers were responsible for the data selection process, and all the extracted data were synthesized and summarized for brief and in-depth analyses depending on the GAI approaches used (traditional retrieval and rule-based techniques vs advanced GAI techniques).
    RESULTS: In this review of 144 articles, 44 (30.6%) met the inclusion criteria for detailed analysis. Six key uses of advanced GAI emerged: mental disorder detection, counseling support, therapeutic application, clinical training, clinical decision-making support, and goal-driven optimization. Advanced GAI systems have been mainly focused on therapeutic applications (n=19, 43%) and counseling support (n=13, 30%), with clinical training being the least common. Most studies (n=28, 64%) focused broadly on mental health, while specific conditions such as anxiety (n=1, 2%), bipolar disorder (n=2, 5%), eating disorders (n=1, 2%), posttraumatic stress disorder (n=2, 5%), and schizophrenia (n=1, 2%) received limited attention. Despite prevalent use, the efficacy of ChatGPT in the detection of mental disorders remains insufficient. In addition, 100 articles on traditional GAI approaches were found, indicating diverse areas where advanced GAI could enhance mental health care.
    CONCLUSIONS: This study provides a comprehensive overview of the use of GAI in mental health care, which serves as a valuable guide for future research, practical applications, and policy development in this domain. While GAI demonstrates promise in augmenting mental health care services, its inherent limitations emphasize its role as a supplementary tool rather than a replacement for trained mental health providers. A conscientious and ethical integration of GAI techniques is necessary, ensuring a balanced approach that maximizes benefits while mitigating potential challenges in mental health care practices.

  • Article type: Journal Article
    The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization, and innovation in clinical practice and medical research by processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding in clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into clinical workflows and scientific research in ophthalmology. It outlines the need for a standard framework for comprehensive assessment, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in ophthalmic clinical service and research, including data privacy, data bias, adaptation friction, over-interdependence, and job replacement, on the basis of which we summarize a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care and improving operational efficiency in ophthalmic clinical service and research. It also advocates for a balanced approach to its adoption.

  • Article type: Journal Article
    BACKGROUND: Teaching medical students the skills required to acquire, interpret, apply, and communicate clinical information is an integral part of medical education. A crucial aspect of this process involves providing students with feedback regarding the quality of their free-text clinical notes.
    OBJECTIVE: The goal of this study was to assess the ability of ChatGPT 3.5, a large language model, to score medical students' free-text history and physical notes.
    METHODS: This is a single-institution, retrospective study. Standardized patients learned a prespecified clinical case and, acting as the patient, interacted with medical students. Each student wrote a free-text history and physical note of their interaction. The students' notes were scored independently by the standardized patients and ChatGPT using a prespecified scoring rubric that consisted of 85 case elements. The measure of accuracy was percent correct.
    RESULTS: The study population consisted of 168 first-year medical students. There was a total of 14,280 scores. The ChatGPT incorrect scoring rate was 1.0%, and the standardized patient incorrect scoring rate was 7.2%; the ChatGPT error rate was thus 86% lower than the standardized patient error rate. The mean number of incorrect scores for ChatGPT, 12 (SD 11), was significantly lower than that for the standardized patients, 85 (SD 74; P=.002).
    CONCLUSIONS: ChatGPT demonstrated a significantly lower error rate compared to standardized patients. This is the first study to assess the ability of a generative pretrained transformer (GPT) program to score medical students' standardized patient-based free-text clinical notes. It is expected that, in the near future, large language models will provide real-time feedback to practicing physicians regarding their free-text notes. GPT artificial intelligence programs represent an important advance in medical education and medical practice.
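    The headline arithmetic in these results can be reproduced directly from the numbers in the abstract (168 students × 85 rubric elements, incorrect-scoring rates of 1.0% vs 7.2%); the sketch below only re-derives figures already stated.

    ```python
    # Re-derive the abstract's arithmetic: 168 students each scored on an
    # 85-element rubric gives 14,280 scores per rater group (ChatGPT or
    # standardized patients).
    students, rubric_elements = 168, 85
    total_scores = students * rubric_elements

    # Reported incorrect-scoring rates for ChatGPT and standardized patients.
    chatgpt_rate, sp_rate = 0.010, 0.072

    # Relative reduction in error rate: (7.2% - 1.0%) / 7.2% is roughly 0.86,
    # matching the "86% lower" figure in the results.
    relative_reduction = (sp_rate - chatgpt_rate) / sp_rate
    ```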

  • Article type: Journal Article
    BACKGROUND: As generative artificial intelligence (GenAI) tools continue advancing, rigorous evaluations are needed to understand their capabilities relative to experienced clinicians and nurses. The aim of this study was to objectively compare the diagnostic accuracy and response formats of ICU nurses versus various GenAI models, with a qualitative interpretation of the quantitative results.
    METHODS: This formative study utilized four written clinical scenarios representative of real ICU patient cases to simulate diagnostic challenges. The scenarios were developed by expert nurses and underwent validation against current literature. Seventy-four ICU nurses participated in a simulation-based assessment involving four written clinical scenarios. Simultaneously, we asked ChatGPT-4 and Claude-2.0 to provide initial assessments and treatment recommendations for the same scenarios. The responses from ChatGPT-4 and Claude-2.0 were then scored by certified ICU nurses for accuracy, completeness and response.
    RESULTS: Nurses consistently achieved higher diagnostic accuracy than AI across open-ended scenarios, though certain models matched or exceeded human performance on standardized cases. Reaction times also diverged substantially. Qualitative differences in response format emerged, such as concision versus verbosity. Variation in GenAI model performance across cases highlighted generalizability challenges.
    CONCLUSIONS: While GenAI demonstrated valuable skills, experienced nurses outperformed in open-ended domains requiring holistic judgement. Continued development to strengthen generalized decision-making abilities is warranted before autonomous clinical integration. Response format interfaces should consider leveraging distinct strengths. Rigorous mixed methods research involving diverse stakeholders can help iteratively inform safe, beneficial human-GenAI partnerships centred on experience-guided care augmentation.
    RELEVANCE TO CLINICAL PRACTICE: This mixed-methods simulation study provides formative insights into optimizing collaborative models of GenAI and nursing knowledge to support patient assessment and decision-making in intensive care. The findings can help guide development of explainable GenAI decision support tailored for critical care environments.
    PATIENT OR PUBLIC CONTRIBUTION: Patients or the public were not involved in the design and implementation of the study or the analysis and interpretation of the data.

  • Article type: Journal Article
    With its increasing popularity, healthcare professionals and patients may use ChatGPT to obtain medication-related information. This study was conducted to assess ChatGPT's ability to provide satisfactory responses (i.e., directly answers the question; accurate, complete, and relevant) to medication-related questions posed to an academic drug information service. ChatGPT responses were compared to responses generated by the investigators through the use of traditional resources, and references were evaluated. Thirty-nine questions were entered into ChatGPT; the three most common categories were therapeutics (8; 21%), compounding/formulation (6; 15%), and dosage (5; 13%). Ten (26%) questions were answered satisfactorily by ChatGPT. Of the 29 (74%) questions that were not answered satisfactorily, deficiencies included lack of a direct response (11; 38%), lack of accuracy (11; 38%), and/or lack of completeness (12; 41%). References were included with eight (29%) responses; each included fabricated references. Presently, healthcare professionals and consumers should be cautioned against using ChatGPT for medication-related information.

  • Article type: Journal Article
    BACKGROUND: Qualitative methods are incredibly beneficial to the dissemination and implementation of new digital health interventions; however, these methods can be time intensive and slow down dissemination when timely knowledge from the data sources is needed in ever-changing health systems. Recent advancements in generative artificial intelligence (GenAI) and their underlying large language models (LLMs) may provide a promising opportunity to expedite the qualitative analysis of textual data, but their efficacy and reliability remain unknown.
    OBJECTIVE: The primary objectives of our study were to evaluate the consistency in themes, reliability of coding, and time needed for inductive and deductive thematic analyses between GenAI (ie, ChatGPT and Bard) and human coders.
    METHODS: The qualitative data for this study consisted of 40 brief SMS text message reminder prompts used in a digital health intervention for promoting antiretroviral medication adherence among people with HIV who use methamphetamine. Inductive and deductive thematic analyses of these SMS text messages were conducted by 2 independent teams of human coders. An independent human analyst conducted analyses following both approaches using ChatGPT and Bard. The consistency in themes (or the extent to which the themes were the same) and reliability (or agreement in coding of themes) between methods were compared.
    RESULTS: The themes generated by GenAI (both ChatGPT and Bard) were consistent with 71% (5/7) of the themes identified by human analysts following inductive thematic analysis. The consistency in themes was lower between humans and GenAI following a deductive thematic analysis procedure (ChatGPT: 6/12, 50%; Bard: 7/12, 58%). The percentage agreement (or intercoder reliability) for these congruent themes between human coders and GenAI ranged from fair to moderate (ChatGPT, inductive: 31/66, 47%; ChatGPT, deductive: 22/59, 37%; Bard, inductive: 20/54, 37%; Bard, deductive: 21/58, 36%). In general, ChatGPT and Bard performed similarly to each other across both types of qualitative analyses in terms of consistency of themes (inductive: 6/6, 100%; deductive: 5/6, 83%) and reliability of coding (inductive: 23/62, 37%; deductive: 22/47, 47%). On average, GenAI required significantly less overall time than human coders when conducting qualitative analysis (20, SD 3.5 min vs 567, SD 106.5 min).
    CONCLUSIONS: The promising consistency in the themes generated by human coders and GenAI suggests that these technologies hold promise in reducing the resource intensiveness of qualitative thematic analysis; however, the relatively lower reliability in coding between them suggests that hybrid approaches are necessary. Human coders appeared to be better than GenAI at identifying nuanced and interpretative themes. Future studies should consider how these powerful technologies can be best used in collaboration with human coders to improve the efficiency of qualitative research in hybrid approaches while also mitigating potential ethical risks that they may pose.
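    Percent agreement, the intercoder-reliability measure reported above, is simply the share of items two coders labeled identically. A minimal sketch with made-up theme codes (the study's data are not reproduced here):

    ```python
    # Percent agreement between two coders: fraction of items assigned the
    # same theme. The theme codes below are invented for illustration only.
    def percent_agreement(codes_a, codes_b):
        """Share of positions where both coders chose the same theme."""
        if len(codes_a) != len(codes_b):
            raise ValueError("coders must rate the same items")
        matches = sum(a == b for a, b in zip(codes_a, codes_b))
        return matches / len(codes_a)

    human = ["reminder", "support", "support", "health", "reminder", "support"]
    genai = ["reminder", "support", "health", "health", "support", "support"]
    agreement = percent_agreement(human, genai)  # 4 of 6 codes match
    ```

    Chance-corrected statistics such as Cohen's kappa are usually preferred over raw percent agreement; the abstract reports percentages, so that is what the sketch computes.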

  • Article type: Journal Article
    Large Language Models (LLMs) offer advanced text generation capabilities, sometimes surpassing human abilities. However, their use without proper expertise poses significant challenges, particularly in educational contexts. This article explores different facets of natural language generation (NLG) within the educational realm, assessing its advantages and disadvantages, particularly concerning LLMs. It addresses concerns regarding the opacity of LLMs and the potential bias in their generated content, advocating for transparent solutions. Therefore, it examines the feasibility of integrating OpenLogos expert-crafted resources into language generation tools used for paraphrasing and translation. In the context of the Multi3Generation COST Action (CA18231), we have been emphasizing the significance of incorporating OpenLogos into language generation processes, and the need for clear guidelines and ethical standards in generative models involving multilingual, multimodal, and multitasking capabilities. The Multi3Generation initiative strives to progress NLG research for societal welfare, including its educational applications. It promotes inclusive models inspired by the Logos Model, prioritizing transparency, human control, preservation of language principles and meaning, and acknowledgment of the expertise of resource creators. We envision a scenario where OpenLogos can contribute significantly to inclusive AI-supported education. Ethical considerations and limitations related to AI implementation in education are explored, highlighting the importance of maintaining a balanced approach consistent with traditional educational principles. Ultimately, the article advocates for educators to adopt innovative tools and methodologies to foster dynamic learning environments that facilitate linguistic development and growth.
    Large Language Models boast advanced text generation quality and capabilities, often surpassing those of humans. However, they also pose significant challenges when used without proper expertise or care. In an educational context, the examination of language generation tools and their use by students is vital for establishing guidelines and a shared understanding of their ethical usage. This article explores several aspects of language generation within an educational context, and showcases the potential use of OpenLogos resources, provided within the framework of the Multi3Generation COST Action (CA18231) in language study and their integration into language learning tools, such as paraphrasing (monolingual) and translation (bilingual or multilingual). This article emphasizes the importance of leveraging OpenLogos in education, especially in language learning or language enhancement contexts. By embracing innovative tools and methodologies, educators can nurture a dynamic and enriching learning environment conducive to linguistic growth and development.

  • Article type: Journal Article
    Cardiovascular disease (CVD) is a major cause of mortality worldwide, especially in resource-limited countries with limited access to healthcare resources. Early detection and accurate imaging are vital for managing CVD, emphasizing the significance of patient education. Generative artificial intelligence (AI), including algorithms to synthesize text, speech, images, and combinations thereof given a specific scenario or prompt, offers promising solutions for enhancing patient education. By combining vision and language models, generative AI enables personalized multimedia content generation through natural language interactions, benefiting patient education in cardiovascular imaging. Simulations, chat-based interactions, and voice-based interfaces can enhance accessibility, especially in resource-limited settings. Despite its potential benefits, implementing generative AI in resource-limited countries faces challenges like data quality, infrastructure limitations, and ethical considerations. Addressing these issues is crucial for successful adoption. Ethical challenges related to data privacy and accuracy must also be overcome to ensure better patient understanding, treatment adherence, and improved healthcare outcomes. Continued research, innovation, and collaboration in generative AI have the potential to revolutionize patient education. This can empower patients to make informed decisions about their cardiovascular health, ultimately improving healthcare outcomes in resource-limited settings.

  • Article type: Journal Article
    OBJECTIVE: The aim of this paper is to investigate the incorporation of visual narratives, such as comics and graphics, into nursing education using Generative Artificial Intelligence (GAI) models like DALL-E.
    BACKGROUND: Visual narratives serve as a powerful method for communicating intricate concepts in nursing education. Despite their advantages, challenges in creating effective educational comics persist due to the need for expertise in graphic design and the associated time and resource constraints.
    DESIGN: This study examines existing literature that highlights the efficacy of visual narratives in education and demonstrates the potential of GAI models, specifically DALL-E, in creating visual narratives for nursing education.
    METHODS: We analyze the potential of GAI models, specifically DALL-E, to create visual narratives for educational purposes. This was demonstrated through illustrative examples addressing sensitive topics, illustrating research methodology and designing recruitment posters for clinical trials. Additionally, we discussed the necessity of reviewing and editing the text generated by DALL-E to ensure its accuracy and relevance in educational contexts. The method also considered legal concerns related to copyright and ownership of the generated content, highlighting the evolving legal landscape in this domain.
    RESULTS: The study found that GAI, specifically DALL-E, has significant potential to bridge the gap in creating visual narratives for nursing education. While offering cost-effectiveness and accessibility, GAI tools require careful consideration of challenges such as text-related errors, misinterpretation of user prompts and legal concerns.
    CONCLUSIONS: GAI models like DALL-E offer promising solutions for enhancing visual storytelling in nursing education. However, their effective integration requires a collaborative approach, where educators engage with these tools as co-pilots, leveraging their capabilities while mitigating potential drawbacks. By doing so, educators can harness the full potential of GAI to enrich the educational experience for learners through compelling visual narratives.