AI risk

  • Article type: Journal Article
    AI-enabled synthetic biology has tremendous potential but also significantly increases biorisks and brings about a new set of dual-use concerns. The picture is complicated given the vast innovations envisioned to emerge by combining emerging technologies, as AI-enabled synthetic biology potentially scales up bioengineering into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation, or more ambitiously fostering a huge bioeconomy, do not necessarily conflict with biosafety but need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that traverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. How to achieve early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab, from deliberate misuse, or from the public realm will constantly need to evolve, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime and scientists generally adhere to biosafety protocols, even experimental but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations. Given these sets of issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different either.

  • Article type: Journal Article
    A growing number of artificial intelligence (AI)-based systems are being proposed and developed in cardiology, driven by the increasing need to deal with the vast amount of clinical and imaging data with the ultimate aim of advancing patient care, diagnosis and prognostication. However, there is a critical gap between the development and clinical deployment of AI tools. A key consideration for implementing AI tools into real-life clinical practice is their "trustworthiness" by end-users. Namely, we must ensure that AI systems can be trusted and adopted by all parties involved, including clinicians and patients. Here we provide a summary of the concepts involved in developing a "trustworthy AI system." We describe the main risks of AI applications and potential mitigation techniques for the wider application of these promising techniques in the context of cardiovascular imaging. Finally, we show why trustworthy AI concepts are important governing forces of AI development.