Keywords: artificial intelligence; bioweapon development; clinical pharmacology; clinical toxicology; dual-use dilemma; large language models; regulatory mechanisms

MeSH: Humans; Pharmacology, Clinical; Artificial Intelligence; Language

Source: DOI:10.1111/bcp.15899

Abstract:
Aims: This paper aims to explore the possibility of employing large language models (LLMs), a type of artificial intelligence (AI), in clinical pharmacology, with a focus on their possible misuse in bioweapon development. Ethical considerations, legislation and potential risk-reduction measures are also analysed.
Methods: The existing literature is reviewed to investigate the potential misuse of AI and LLMs in bioweapon creation. The search includes articles from PubMed, Scopus and the Web of Science Core Collection, identified using a specific protocol. The OECD.ai platform was used to explore the regulatory landscape.
Results: The review highlights the dual-use vulnerability of AI and LLMs, with a focus on bioweapon development. A case study is then used to illustrate how AI manipulation could result in harmful substance synthesis. Existing regulations inadequately address the ethical concerns tied to AI and LLMs. Mitigation measures are proposed, including technical solutions (explainable AI), establishing ethical guidelines through collaborative efforts, and implementing policy changes to create a comprehensive regulatory framework.
Conclusions: The integration of AI and LLMs into clinical pharmacology presents invaluable opportunities while also introducing significant ethical and safety considerations. Addressing the dual-use nature of AI requires robust regulations, as well as a strategic approach grounded in technical solutions and ethical values, following the principles of transparency, accountability and safety. Additionally, AI's potential role in developing countermeasures against novel hazardous substances is underscored. By adopting a proactive approach, the potential benefits of AI and LLMs can be fully harnessed while minimizing the associated risks.