Generative AI

  • Article type: Editorial
    Generative AI is revolutionizing oncological imaging, enhancing cancer detection and diagnosis. This editorial explores its impact on expanding datasets, improving image quality, and enabling predictive oncology. We discuss ethical considerations and introduce a unique perspective on personalized cancer screening using AI-generated digital twins. This approach could optimize screening protocols, improve early detection, and tailor treatment plans. While challenges remain, generative AI in oncological imaging offers unprecedented opportunities to advance cancer care and improve patient outcomes.

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    How good a research scientist is ChatGPT? We systematically probed the capabilities of GPT-3.5 and GPT-4 across four central components of the scientific process: as a Research Librarian, Research Ethicist, Data Generator, and Novel Data Predictor, using psychological science as a testing field. In Study 1 (Research Librarian), unlike human researchers, GPT-3.5 and GPT-4 hallucinated, authoritatively generating fictional references 36.0% and 5.4% of the time, respectively, although GPT-4 exhibited an evolving capacity to acknowledge its fictions. In Study 2 (Research Ethicist), GPT-4 (though not GPT-3.5) proved capable of detecting violations like p-hacking in fictional research protocols, correcting 88.6% of blatantly presented issues, and 72.6% of subtly presented issues. In Study 3 (Data Generator), both models consistently replicated patterns of cultural bias previously discovered in large language corpora, indicating that ChatGPT can simulate known results, an antecedent to usefulness for both data generation and skills like hypothesis generation. Contrastingly, in Study 4 (Novel Data Predictor), neither model was successful at predicting new results absent in their training data, and neither appeared to leverage substantially new information when predicting more vs. less novel outcomes. Together, these results suggest that GPT is a flawed but rapidly improving librarian, a decent research ethicist already, capable of data generation in simple domains with known characteristics but poor at predicting novel patterns of empirical data to aid future experimentation.

  • Article type: Journal Article
    The SunoCaps dataset aims to provide an innovative contribution to music data. Expert descriptions of human-made musical pieces from the widely used MusicCaps dataset are used as prompts for generating the complete songs in this dataset. This automatic music generation is performed with Suno, a state-of-the-art generator of audio-based music. A subset of 64 pieces from MusicCaps is currently included, for a total of 256 generated entries. This total stems from generating four different variations for each human piece: two versions based on the original caption and two versions based on the original aspect description. As an AI-generated music dataset, SunoCaps also includes expert-based information on prompt alignment, with the main differences between prompt and final generation annotated, as well as annotations describing the main discrete emotions induced by each piece. The dataset can serve an array of applications, such as creating and improving music-generation validation tools, training systems for multi-layered architectures, and optimizing music emotion estimation systems.

  • Article type: Journal Article
    BACKGROUND: Evaluating the accuracy and educational utility of artificial intelligence-generated medical cases, especially those produced by large language models such as ChatGPT-4 (developed by OpenAI), is crucial yet underexplored.
    OBJECTIVE: This study aimed to assess the educational utility of ChatGPT-4-generated clinical vignettes and their applicability in educational settings.
    METHODS: Using a convergent mixed methods design, a web-based survey was conducted from January 8 to 28, 2024, to evaluate 18 medical cases generated by ChatGPT-4 in Japanese. In the survey, 6 main question items were used to evaluate the quality of the generated clinical vignettes and their educational utility, namely information quality, information accuracy, educational usefulness, clinical match, terminology accuracy (TA), and diagnosis difficulty. Feedback was solicited from physicians specializing in general internal medicine or general medicine and experienced in medical education. Chi-square and Mann-Whitney U tests were performed to identify differences among cases, and linear regression was used to examine trends associated with physicians' experience. Thematic analysis of qualitative feedback was performed to identify areas for improvement and confirm the educational utility of the cases.
    RESULTS: Of the 73 invited participants, 71 (97%) responded. The respondents, primarily male (64/71, 90%), spanned a broad range of practice years (from 1976 to 2017) and represented diverse hospital sizes throughout Japan. The majority deemed the information quality (mean 0.77, 95% CI 0.75-0.79) and information accuracy (mean 0.68, 95% CI 0.65-0.71) to be satisfactory, with these responses being based on binary data. The average scores assigned were 3.55 (95% CI 3.49-3.60) for educational usefulness, 3.70 (95% CI 3.65-3.75) for clinical match, 3.49 (95% CI 3.44-3.55) for TA, and 2.34 (95% CI 2.28-2.40) for diagnosis difficulty, based on a 5-point Likert scale. Statistical analysis showed significant variability in content quality and relevance across the cases (P<.001 after Bonferroni correction). Participants suggested improvements in generating physical findings, using natural language, and enhancing medical TA. The thematic analysis highlighted the need for clearer documentation, clinical information consistency, content relevance, and patient-centered case presentations.
    CONCLUSIONS: ChatGPT-4-generated medical cases written in Japanese possess considerable potential as resources in medical education, with recognized adequacy in quality and accuracy. Nevertheless, there is a notable need for enhancements in the precision and realism of case details. This study emphasizes ChatGPT-4's value as an adjunctive educational tool in the medical field, requiring expert oversight for optimal application.
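    To make the reported analysis concrete, the sketch below shows how mean ratings with 95% CIs and a Bonferroni-corrected Mann-Whitney U comparison between two vignettes might be computed in Python with SciPy. The rating arrays and the number of comparisons are illustrative placeholders, not the study's data.

```python
# Illustrative sketch (not the authors' code): mean ratings with 95% CIs and a
# Bonferroni-corrected Mann-Whitney U comparison for 5-point Likert responses.
# The arrays below are made-up placeholder ratings, not the study's data.
import numpy as np
from scipy import stats

case_a = np.array([4, 3, 5, 4, 3, 4, 2, 5, 4, 3])  # hypothetical ratings, case A
case_b = np.array([2, 3, 2, 4, 3, 2, 3, 2, 3, 2])  # hypothetical ratings, case B

def mean_ci(x, confidence=0.95):
    """Mean and normal-approximation confidence interval of the ratings."""
    m = x.mean()
    half_width = stats.norm.ppf(0.5 + confidence / 2) * x.std(ddof=1) / np.sqrt(len(x))
    return m, (m - half_width, m + half_width)

print("case A:", mean_ci(case_a))
print("case B:", mean_ci(case_b))

# Pairwise comparison with a Bonferroni correction for the number of comparisons
# (e.g. all pairs of 18 cases: 18 * 17 / 2 = 153; an assumption for illustration).
n_comparisons = 153
u_stat, p_raw = stats.mannwhitneyu(case_a, case_b, alternative="two-sided")
p_corrected = min(p_raw * n_comparisons, 1.0)
print(f"U={u_stat:.1f}, raw p={p_raw:.4f}, Bonferroni-corrected p={p_corrected:.4f}")
```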

  • Article type: Journal Article
    Global rates of mental health concerns are rising, and there is increasing realization that existing models of mental health care will not adequately expand to meet the demand. With the emergence of large language models (LLMs) has come great optimism regarding their promise to create novel, large-scale solutions to support mental health. Despite their nascence, LLMs have already been applied to mental health-related tasks. In this paper, we summarize the extant literature on efforts to use LLMs to provide mental health education, assessment, and intervention and highlight key opportunities for positive impact in each area. We then highlight risks associated with LLMs' application to mental health and encourage the adoption of strategies to mitigate these risks. The urgent need for mental health support must be balanced with responsible development, testing, and deployment of mental health LLMs. It is especially critical to ensure that mental health LLMs are fine-tuned for mental health, enhance mental health equity, and adhere to ethical standards, and that people, including those with lived experience of mental health concerns, are involved in all stages from development through deployment. Prioritizing these efforts will minimize potential harms to mental health and maximize the likelihood that LLMs will positively impact mental health globally.

  • Article type: Journal Article
    Demographics, social determinants of health, and family history documented in the unstructured text of electronic health records are increasingly being studied to understand how this information can be used alongside structured data to improve healthcare outcomes. Since the GPT models were released, many studies have applied them to extract this information from narrative clinical notes. Unlike existing work, our research investigates zero-shot learning for extracting this information jointly, providing minimal information to the GPT model. We utilize de-identified real-world clinical notes annotated for demographics, various social determinants, and family history information. Given that the GPT model might return text that differs from the text in the original data, we explore two sets of evaluation metrics, traditional NER evaluation metrics and semantic similarity evaluation metrics, to fully characterize performance. Our results show that the GPT-3.5 method achieved an average of 0.975 F1 on demographics extraction, 0.615 F1 on social determinants extraction, and 0.722 F1 on family history extraction. We believe these results can be further improved through model fine-tuning or few-shot learning. Through case studies, we also identified limitations of the GPT models that need to be addressed in future research.
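    The abstract above does not publish its prompts, so the following is only a minimal sketch of what a zero-shot extraction call of this kind could look like with the OpenAI Python client. The prompt wording, model name, output schema, and the synthetic note are all assumptions for illustration; the study's evaluation against gold annotations would then use the NER-style and semantic-similarity metrics mentioned above.

```python
# Hedged sketch of a zero-shot extraction call in the spirit of the study above.
# The prompt wording, model name, and output schema are assumptions; the note
# text is synthetic, not a real (de-identified) record.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

note = (
    "72-year-old retired teacher, lives alone, former smoker, "
    "mother had breast cancer, limited transportation to appointments."
)

prompt = (
    "Extract demographics, social determinants of health, and family history "
    "from the clinical note below. Return JSON with keys 'demographics', "
    "'social_determinants', and 'family_history', each a list of strings.\n\n"
    f"Note: {note}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output helps span-level evaluation
)

# Assumes the model returns valid JSON; a real pipeline would validate the output.
extracted = json.loads(response.choices[0].message.content)
print(extracted)
```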

  • Article type: Journal Article
    Training data fuel and shape the development of artificial intelligence (AI) models. Intensive data requirements are a major bottleneck limiting the success of AI tools in sectors with inherently scarce data. In health care, training data are difficult to curate, triggering growing concerns that the current lack of access to health care among under-privileged social groups will translate into future bias in health care AIs. In this report, we developed an autoencoder to grow and enhance inherently scarce datasets and so alleviate our dependence on big data.
    Computational study with open-source data.
    The data were obtained from 6 open-source datasets comprising patients aged 40-80 years in Singapore, China, India, and Spain.
    The reported framework generates synthetic images based on real-world patient imaging data. As a test case, we used the autoencoder to expand publicly available training sets of optic disc photos and evaluated the ability of the resultant datasets to train AI models to detect glaucomatous optic neuropathy.
    The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the glaucoma detector; a higher AUC indicates better detection performance.
    Results show that enhancing datasets with synthetic images generated by the autoencoder led to superior training sets that improved the performance of AI models.
    Our findings help address the increasingly untenable data volume and quality requirements for AI model development and have implications beyond health care, toward empowering AI adoption in all similarly data-challenged fields.
    The authors have no proprietary or commercial interest in any materials discussed in this article.
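    The report does not reproduce its architecture here, so the following is a minimal sketch, assuming PyTorch, of the kind of image autoencoder that could be trained on fundus photographs and whose reconstructions (or decoder outputs from perturbed latent codes) could serve as synthetic images for dataset augmentation. Layer sizes and the 224x224 RGB input are illustrative assumptions.

```python
# Minimal autoencoder sketch (assumed PyTorch); architecture and input size are
# illustrative, not the report's published model.
import torch
from torch import nn

class ImageAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 224 -> 112
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 112 -> 56
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 56 * 56, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 56 * 56),
            nn.Unflatten(1, (64, 56, 56)),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 56 -> 112
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 112 -> 224
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ImageAutoencoder()
images = torch.rand(4, 3, 224, 224)  # stand-in batch; real optic disc photos would go here
reconstructions = model(images)
loss = nn.functional.mse_loss(reconstructions, images)  # reconstruction training objective
print(reconstructions.shape, float(loss))
```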

  • Article type: Journal Article
    Technology alters both perceptions of human intelligence and creativity and the actual processes of intelligence and creativity. Skills that were once important for human intelligence, for example, computational ones, no longer hold anywhere near the same importance they did before the age of computers. The advantage of computers is that they may lead us to focus on what we believe to be more important things than what they have replaced. In the case of penmanship, spelling, or arithmetic computation, such an argument could bear fruit. But in the case of human creativity, the loss of creative skills and attitudes may be a long-term loss to humanity. Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the kinds of paradigm-breaking ideas the world needs right now to solve the serious problems that confront it, such as global climate change, pollution, violence, increasing income disparities, and creeping autocracy.

  • Article type: Journal Article
    This paper demonstrates a new, promising method using generative artificial intelligence (AI) to augment the educational value of electronic textbooks and research papers (locally stored on the user's machine) and maximize their potential for self-study, in a way that goes beyond the standard electronic search and indexing already available in all of these textbooks and files. The presented method runs fully locally on the user's machine, is generally affordable, and does not require high technical expertise to set up and customize with the user's own content.
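    The abstract does not name a specific toolchain, so the sketch below illustrates one common, fully local pattern consistent with the description: ranking passages from locally stored text with TF-IDF and assembling a prompt for a locally hosted generative model. The folder name, question, and use of scikit-learn are assumptions for illustration, not the paper's actual setup.

```python
# Hedged sketch: local retrieval over the user's own files, followed by prompt
# assembly for a locally hosted language model (serving step not shown).
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. Load locally stored study material (e.g., text extracted from e-textbooks).
docs = [p.read_text(encoding="utf-8") for p in Path("my_textbooks").glob("*.txt")]

# 2. Rank documents against the learner's question with TF-IDF similarity.
question = "Explain the difference between supervised and unsupervised learning."
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)
scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
best_passage = docs[scores.argmax()]

# 3. Assemble a prompt for a locally running generative model, e.g. one served
#    through an offline runtime such as llama.cpp.
prompt = (
    "Using only the excerpt below from my own study material, answer the question.\n\n"
    f"Excerpt:\n{best_passage[:2000]}\n\nQuestion: {question}"
)
print(prompt[:500])
```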
