Keywords: artificial intelligence; deployment; ethics; participatory democracy; robotics

MeSH: Humans; Artificial Intelligence; Robotics; Ethical Theory

Source: DOI: 10.1017/S0963180123000087

Abstract:
Current national and international guidelines for the ethical design and development of artificial intelligence (AI) and robotics emphasize ethical theory. Various governing and advisory bodies have generated sets of broad ethical principles, which institutional decisionmakers are encouraged to apply to particular practical decisions. Although much of this literature examines the ethics of designing and developing AI and robotics, medical institutions typically must make purchase and deployment decisions about technologies that have already been designed and developed. The primary problem facing medical institutions is not one of ethical design but of ethical deployment. The purpose of this paper is to develop a practical model by which medical institutions may make ethical deployment decisions about ready-made advanced technologies. Our slogan is "more process, less principles." Ethically sound decisionmaking requires that the process by which medical institutions make such decisions include participatory, deliberative, and conservative elements. We argue that our model preserves the strengths of existing frameworks, avoids their shortcomings, and delivers its own moral, practical, and epistemic advantages.