Artificial intelligence

  • Article type: Journal Article
    OBJECTIVE: In the context of ophthalmologic practice, there has been a rapid increase in the amount of data collected using electronic health records (EHR). Artificial intelligence (AI) offers a promising means of centralizing data collection and analysis, but to date, most AI algorithms have only been applied to analyzing image data in ophthalmologic practice. In this review we aimed to characterize the use of AI in the analysis of EHR, and to critically appraise the adherence of each included study to the CONSORT-AI reporting guideline.
    METHODS: A comprehensive search of three relevant databases (MEDLINE, EMBASE, and Cochrane Library) from January 2010 to February 2023 was conducted. The included studies were evaluated for reporting quality based on the AI-specific items from the CONSORT-AI reporting guideline.
    RESULTS: Of the 4,968 articles identified by our search, 89 studies met all inclusion criteria and were included in this review. Most of the studies utilized AI for ocular disease prediction (n = 41, 46.1%), and diabetic retinopathy was the most studied ocular pathology (n = 19, 21.3%). The overall mean CONSORT-AI score across the 14 measured items was 12.1 (range 8-14, median 12). Categories with the lowest adherence rates were: describing handling of poor quality data (48.3%), specifying participant inclusion and exclusion criteria (56.2%), and detailing access to the AI intervention or its code, including any restrictions (62.9%).
    CONCLUSIONS: We have identified that AI is prominently being used for disease prediction in ophthalmology clinics; however, these algorithms are limited by their lack of generalizability and cross-center reproducibility. A standardized framework for AI reporting should be developed to improve AI applications in the management of ocular disease and ophthalmology decision making.
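The summary statistics reported above (mean 12.1, median 12, range 8-14 across 14 CONSORT-AI items) can be reproduced from per-study checklist totals. A minimal sketch, using hypothetical scores rather than the review's actual data:

```python
# Hypothetical per-study CONSORT-AI totals (0-14); NOT the review's actual data.
scores = [12, 14, 8, 13, 12, 11, 14, 12, 10, 13]

mean_score = sum(scores) / len(scores)

sorted_scores = sorted(scores)
n = len(sorted_scores)
# Median: middle value, or mean of the two middle values when n is even.
if n % 2 == 0:
    median_score = (sorted_scores[n // 2 - 1] + sorted_scores[n // 2]) / 2
else:
    median_score = sorted_scores[n // 2]

score_range = (min(scores), max(scores))
print(f"mean={mean_score:.1f}, median={median_score}, range={score_range}")
```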

  • Article type: English Abstract
    Blood cell morphological examination is a crucial method for the diagnosis of blood diseases, but traditional manual microscopy is characterized by low efficiency and susceptibility to subjective biases. The application of artificial intelligence (AI) technology has improved the efficiency and quality of blood cell examinations and facilitated the standardization of test results. Currently, a variety of AI devices are either in clinical use or under research, with diverse technical requirements and configurations. The Experimental Diagnostic Study Group of the Hematology Branch of the Chinese Medical Association has organized a panel of experts to formulate this consensus. The consensus covers term definitions, scope of application, technical requirements, clinical application, data management, and information security. It emphasizes the importance of specimen preparation, image acquisition, image segmentation algorithms, and cell feature extraction and classification, and sets forth basic requirements for the cell recognition spectrum. Moreover, it provides detailed explanations regarding the fine classification of pathological cells, requirements for cell training and testing, quality control standards, and AI assistance to humans in issuing diagnostic reports. Additionally, the consensus underscores the significance of data management and information security to ensure the safety of patient information and the accuracy of data.

  • Article type: Journal Article
    Artificial intelligence (AI) and digital innovation are transforming healthcare. Technologies such as machine learning in image analysis, natural language processing in medical chatbots and electronic medical record extraction have the potential to improve screening, diagnostics and prognostication, leading to precision medicine and preventive health. However, it is crucial to ensure that AI research is conducted with scientific rigour to facilitate clinical implementation. Therefore, reporting guidelines have been developed to standardise and streamline the development and validation of AI technologies in health. This commentary proposes a structured approach to utilise these reporting guidelines for the translation of promising AI techniques from research and development into clinical translation, and eventual widespread implementation from bench to bedside.

  • Article type: Journal Article
    OBJECTIVE: Stress urinary incontinence (SUI) affects countless women worldwide. Given ChatGPT's rising ubiquity, patients may turn to the platform for SUI advice. Our objective was to evaluate the quality of clinical information about SUI from the ChatGPT platform.
    METHODS: The most-asked patient questions regarding SUI were derived from patient materials from societal websites and forums and queried using ChatGPT 3.5. The responses from ChatGPT were compiled into a survey and disseminated to 3 AUA guideline committee members who developed the Surgical Management of Female SUI guidelines. They were asked to grade responses on reliability, understandability, quality, and actionability using the DISCERN and Patient Education Materials Assessment Tool standardized questionnaires. Accuracy was assessed with a 4-point Likert scale, and readability with the Flesch Reading Ease score.
    RESULTS: The overall material was rated as moderate to moderately high quality (DISCERN = 3.73/5), with potentially important but no serious shortcomings. Reliability and quality were reported to be 63% and 75%, respectively. Understandability was 89%, actionability 18%, and accuracy 88%. All question domains were rated moderate or better. Actionability was poor in all domains. Every response was "hard to read," translating to a college-graduate reading level.
    CONCLUSIONS: The urologic community should critically evaluate this platform's output if patients are to use it for adjunctive medical guidance. AUA committee members, who are experts in the field, rated ChatGPT-produced responses on SUI as moderate to moderately high quality, with moderate reliability, excellent understandability, and poor actionability on standardized questionnaires. The reading level of the material was advanced, which is an area of potential improvement to make generated responses more comprehensible.
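The Flesch Reading Ease score used above is a standard readability formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), where lower scores indicate harder text (a college-graduate level is roughly 0-30). A minimal sketch with a naive vowel-group syllable counter — a rough approximation, not the validated tool the study used:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (at least 1 per word).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Split sentences on terminal punctuation; discard empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

Scores above 100 are possible for very simple text; real tools use dictionary-based syllable counts, so results will differ from this heuristic.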

  • Article type: Letter
    In a recent Letter to the Editor authored by Daungsupawong et al. in Aesthetic Plastic Surgery, titled "ChatGPT and Clinical Questions on the Practical Guideline of Blepharoptosis: Correspondence," the authors emphasized important points regarding input-language differences between input and output references. However, advanced versions, such as GPT-4, have shown marginal differences between English and Chinese inputs, possibly because of the use of larger training data. To address this issue, non-English-language-oriented large language models (LLMs) have been developed. The ability of LLMs to refer to existing references varies, with newer models, such as GPT-4, showing higher reference rates than GPT-3.5. Future research should focus on addressing the current limitations and enhancing the effectiveness of emerging LLMs in providing accurate and informative answers to medical questions across multiple languages. Level of Evidence V: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  • Article type: Journal Article
    Background: When properly utilized, artificial intelligence generated content (AIGC) may improve virtually every aspect of research, from data gathering to synthesis. Nevertheless, when used inappropriately, AIGC may lead to the dissemination of inaccurate information and introduce potential ethical concerns. Research Design: Cross-sectional. Study Sample: 65 top surgical journals. Data Collection: Each journal's submission guidelines and portal were queried for guidelines regarding AIGC use. Results: We found that, in July 2023, 60% of the top 65 surgical journals had introduced guidelines for use, with more surgical journals (68%) introducing guidelines than surgical subspecialty journals (52.5%), including otolaryngology (40%). Furthermore, of the 39 with guidelines, only 69.2% gave specific use guidelines. No included journal, at the time of analysis, explicitly disallowed AIGC use. Conclusions: Altogether, these data suggest that while many journals have quickly reacted to AIGC usage, the quality of such guidelines is still variable. This should be pre-emptively addressed within academia.

  • Article type: Journal Article
    OBJECTIVE: Artificial Intelligence (AI) models in radiation therapy are being developed with increasing pace. Despite this, the radiation therapy community has not widely adopted these models in clinical practice. A cohesive guideline on how to develop, report and clinically validate AI algorithms might help bridge this gap.
    METHODS: A Delphi process with all co-authors was followed to determine which topics should be addressed in this comprehensive guideline. Separate sections of the guideline, including Statements, were written by subgroups of the authors and discussed with the whole group at several meetings. Statements were formulated and scored as highly recommended or recommended.
    RESULTS: The following topics were found most relevant: decision making, image analysis, volume segmentation, treatment planning, patient-specific quality assurance of treatment delivery, adaptive treatment, outcome prediction, training, validation and testing of AI model parameters, model availability for others to verify, model quality assurance/updates and upgrades, and ethics. Key references were given, together with an outlook on current hurdles and possibilities to overcome them. Nineteen statements were formulated.
    CONCLUSIONS: A cohesive guideline has been written which addresses the main topics regarding AI in radiation therapy. It will help guide development, as well as transparent and consistent reporting and validation of new AI tools, and will facilitate their adoption.

  • Article type: Journal Article
    While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines, as a form of written language, can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.

  • Article type: Journal Article
    BACKGROUND: In recent years, generative artificial intelligence models, such as ChatGPT, have increasingly been utilized in healthcare. Despite the high potential of AI models for quick access to sources and formulating responses to clinical questions, the results obtained using these models still require validation through comparison with established clinical guidelines. This study compares the responses of AI models to eight clinical questions with the Italian Association of Medical Oncology (AIOM) guidelines for ovarian cancer.
    METHODS: The authors used the Delphi method to evaluate responses from ChatGPT and the AIOM guidelines. An expert panel of healthcare professionals assessed responses on clarity, consistency, comprehensiveness, usability, and quality using a five-point Likert scale. The GRADE methodology was used to assess the quality of the evidence and the strength of the recommendations.
    RESULTS: A survey involving 14 physicians revealed that the AIOM guidelines consistently scored higher on average than the AI models, with a statistically significant difference. Post hoc tests showed that the AIOM guidelines differed significantly from all AI models, with no significant difference among the AI models.
    CONCLUSIONS: While AI models can provide rapid responses, they must match established clinical guidelines in clarity, consistency, comprehensiveness, usability, and quality. These findings underscore the importance of relying on expert-developed guidelines in clinical decision-making and highlight potential areas for AI model improvement.

  • Article type: Journal Article
    OBJECTIVE: Severe macrovesicular steatosis in donor livers is associated with primary graft dysfunction. The Banff Working Group on Liver Allograft Pathology has proposed recommendations for steatosis assessment of donor liver biopsy specimens with a consensus for defining "large droplet fat" (LDF) and a 3-step algorithmic approach.
    METHODS: We retrieved slides and initial pathology reports from potential liver donor biopsy specimens from 2010 to 2021. Following the Banff approach, we reevaluated LDF steatosis and employed a computer-assisted manual quantification protocol and artificial intelligence (AI) model for analysis.
    RESULTS: In a total of 113 slides from 88 donors, no to mild (<33%) macrovesicular steatosis was reported in 88.5% (100/113) of slides; 8.8% (10/113) were initially reported as showing at least moderate steatosis (≥33%). Subsequent pathology evaluation, following the Banff recommendation, revealed that all slides had LDF below 33%, a finding confirmed through computer-assisted manual quantification and an AI model. Correlation coefficients between pathologist and computer-assisted manual quantification, between computer-assisted manual quantification and the AI model, and between the AI model and pathologist were 0.94, 0.88, and 0.81, respectively (P < .0001 for all).
    CONCLUSIONS: The 3-step approach proposed by the Banff Working Group on Liver Allograft Pathology may be followed when evaluating steatosis in donor livers. The AI model can provide a rapid and objective assessment of liver steatosis.
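The pairwise correlation coefficients reported above (pathologist vs. computer-assisted quantification vs. AI model) are plain Pearson correlations between per-slide fat percentages. A minimal sketch, using hypothetical measurements rather than the study's data:

```python
import math

def pearson(x, y):
    # Pearson r: covariance divided by the product of the standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-slide LDF percentages from two raters; NOT the study's data.
pathologist = [5, 10, 2, 30, 15, 8]
ai_model = [6, 12, 3, 28, 14, 9]
print(round(pearson(pathologist, ai_model), 3))
```

In practice the study would compute this over all 113 slides for each rater pair; highly concordant raters yield r close to 1.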