Fairness

  • Article type: Journal Article
    The purpose of a first-in-human (FIH) clinical trial is to gather information about how the drug or device affects and interacts with the human body: its safety, side effects, and (potential) dosage. As such, the primary goal of an FIH trial is not participant benefit but to gain knowledge of drug or device efficacy, i.e., baseline human safety knowledge. Some FIH clinical trials carry significant foreseeable risk to participants with little to no foreseeable participant benefit. Participation in such trials would be a bad deal for participants, and the research is considered justifiable because of the promise of significant potential social benefit. I argue that there is an ethical tension inherent in risky FIH research and that researchers should fairly compensate risky FIH trial participants. This does not make the risk-benefit outcome more favorable for participants; rather, it amounts to a collective reckoning with the ethical tension inherent in the research.

  • Article type: Letter
    No abstract available.

  • Article type: Journal Article
    Patient demand for procedures has increased in the evolving landscape of cosmetic dermatology. This has been fueled, in part, by social media and the growing normalization of cosmetic enhancements; however, this has led some patients to have potentially unrealistic expectations, placing undue pressure on dermatologists to meet these often unrealizable demands. This pressure is further exacerbated by patients who are seen as difficult, demanding, and time-consuming and who may require extensive counseling. Physicians may adopt dynamic or differential pricing strategies to offset the additional time and effort these patients require. We discuss the ethical concerns surrounding these pricing strategies in the cosmetic sphere, highlight the importance of transparency in pricing, and offer suggestions to promote clarity and fairness in cosmetic dermatology practices.

  • Article type: Journal Article
    BACKGROUND: Sharing and fairness are important prosocial behaviors that help us navigate the social world. However, little is known about how and whether individuals with Williams Syndrome (WS) engage in these behaviors. The unique phenotype of individuals with WS, consisting of high social motivation and limited social cognition, can also offer insight into the role of social motivation in sharing and fairness when compared to typically developing (TD) individuals. The current study used established experimental paradigms to examine sharing and fairness in individuals with WS and TD individuals.
    METHODS: We compared a sample of patients with WS to TD children (6-year-olds) matched by mental age (MA) on two experimental tasks: the Dictator Game (DG, Experiment 1, N = 17 WS, 20 TD), with adults modeling giving behavior, used to test sharing, and the Inequity Game (IG, Experiment 2, N = 14 WS, 17 TD), used to test fairness.
    RESULTS: Results showed that the WS group behaved similarly to the TD group for baseline giving in the DG and in the IG, rejecting disadvantageous offers but accepting advantageous ones. However, after viewing an adult model giving behavior, the WS group gave more than their baseline, with many individuals giving more than half, while the TD group gave less. Combined, these results suggest that social motivation is sufficient for sharing and, in particular, generous sharing, as well as the self-focused form of fairness. Further, individuals with WS appear capable of both learning to be more generous and preventing disadvantageous outcomes, a more complex profile than previously known.
    CONCLUSIONS: The present study provides a snapshot of sharing and fairness-related behaviors in WS, contributing to our understanding of the intriguing social-behavioral phenotype associated with this developmental disorder.
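
    As an editorial illustration of the two paradigms, the Python sketch below encodes their trial logic; the endowment and giving amounts are hypothetical, not the study's parameters.

    def dictator_game_share(endowment: int, n_given: int) -> float:
        """Dictator Game: the participant freely splits an endowment;
        the proportion given away indexes sharing."""
        return n_given / endowment

    def inequity_game_response(own: int, other: int) -> str:
        """Inequity Game: accept or reject a fixed allocation. Rejecting
        disadvantageous offers (other > own) while accepting advantageous
        ones is the self-focused fairness pattern both groups showed."""
        return "reject" if other > own else "accept"

    # Baseline giving vs. giving after watching a generous adult model
    # (hypothetical values: the WS group's post-model giving exceeded half).
    print(dictator_game_share(endowment=10, n_given=2))   # 0.2 at baseline
    print(dictator_game_share(endowment=10, n_given=6))   # 0.6 after modeling
    print(inequity_game_response(own=1, other=4))         # reject (disadvantageous)
    print(inequity_game_response(own=4, other=1))         # accept (advantageous)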

  • Article type: Journal Article
    The equitable allocation of resources has long been a central concern for humanity, prompting extensive research into various motivations that drive the pursuit of distributive justice. In contrast to one of the most fundamental motives, inequality aversion, a conflicting motive has been proposed: rank-reversal aversion. However, it remains unclear whether this rank-reversal aversion persists in the presence of self-rank. Here we provide evidence of rank-reversal aversion in the first-party context and explore diverse moral strategies for distribution. In a modified version of the redistribution game involving 55 online-recruited participants, we observed rank-reversal aversion only when one's rank was maintained. When participants' self-rank was altered, they tended to base their behavior on their new ranks. This behavioral tendency varied among individuals, revealing three distinct moral strategies, all incorporating considerations of rank-reversal. Our findings suggest that rank-reversal aversion can indeed influence one's distribution behavior, although the extent of its impact may vary among individuals, especially when self-rank is a factor. These insights can be extended to political and economic domains, contributing to a deeper understanding of the underlying mechanisms of distributive justice.
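
    To make the rank-reversal construct concrete, the sketch below checks whether a proposed redistribution between two players reverses their payoff ranking; the endowments and transfer sizes are hypothetical, not the study's stimuli.

    def ranks(payoffs: dict) -> dict:
        """Rank players by payoff (1 = highest earner)."""
        ordered = sorted(payoffs, key=payoffs.get, reverse=True)
        return {player: i + 1 for i, player in enumerate(ordered)}

    def reverses_rank(before: dict, after: dict) -> bool:
        """True if the redistribution changes who outranks whom."""
        return ranks(before) != ranks(after)

    before = {"self": 10.0, "other": 6.0}
    mild = {"self": 9.0, "other": 7.0}    # transfer 1: inequality falls, ranks kept
    strong = {"self": 7.0, "other": 9.0}  # transfer 3: equalizes and overshoots

    print(reverses_rank(before, mild))    # False -> rank-preserving redistribution
    print(reverses_rank(before, strong))  # True  -> rank-reversing redistribution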

  • Article type: Journal Article
    Economic decision-making plays a paramount role in both individual and national interests. Individuals have fairness preferences in economic decision-making, but a proposer's moral-related information may affect fairness considerations. In prior ERP studies, researchers have suggested moral identity influences fairness preferences in the Ultimatum Game (UG), but there are discrepancies in the results. Furthermore, whether role models (individuals whom others look to when deciding suitable behaviors), who can modulate people's moral standards, can affect fairness concerns in the UG is still understudied. To address these questions, we selected moral-related statements by eliminating those with illegal information and employed the ERP technique to explore whether the interplay of the proposer's role-model status and moral-related behavior influenced fairness processing in the modified UG, along with the corresponding neural mechanisms. We mainly found that this interaction effect on proposal considerations in the UG was mirrored in both rejection rates and P300 variations. The results demonstrate that the interaction between the proposer's role model and moral behavior can modulate fairness concerns in the UG. Our current work provides new avenues for elucidating the time course of the influencing mechanism of fair distributions in complicated social environments.
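
    As a sketch of the behavioral arm of such a design (the ERP/P300 analysis is not reproduced), the code below simulates a two-by-two factorial of role-model status and proposer moral behavior and tests the interaction on rejection rates; all values are simulated assumptions, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    rows = []
    for role_model in (0, 1):
        for moral in (0, 1):
            # Hypothetical rejection probability with a built-in interaction.
            p = 0.50 - 0.10 * moral - 0.05 * role_model + 0.15 * role_model * moral
            for _ in range(40):  # 40 simulated participants per cell
                rows.append({"role_model": role_model, "moral": moral,
                             "reject_rate": rng.binomial(20, p) / 20})
    df = pd.DataFrame(rows)

    # The interaction term tests whether role-model status modulates the
    # effect of the proposer's moral behavior on rejection rates.
    model = smf.ols("reject_rate ~ role_model * moral", data=df).fit()
    print(anova_lm(model, typ=2))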

  • Article type: Journal Article
    BACKGROUND: Machine learning (ML) algorithms have been heralded as promising solutions for realizing assistive systems in digital healthcare, due to their ability to detect fine-grained patterns that are not easily perceived by humans. Yet ML algorithms have also been critiqued for treating individuals differently based on their demography, thus propagating existing disparities. This paper explores gender and race bias in speech-based ML algorithms that detect behavioral and mental health outcomes.
    METHODS: This paper examines potential sources of bias in the data used to train the ML models, encompassing acoustic features extracted from speech signals and their associated labels, as well as in the ML decisions. The paper further examines approaches to reducing existing bias by using the features least informative of one's demographic information as the ML input, and by transforming the feature space in an adversarial manner to diminish the evidence of demographic information while retaining information about the focal behavioral and mental health state.
    RESULTS: Results are presented in two domains, the first pertaining to gender and race bias when estimating levels of anxiety, and the second pertaining to gender bias in detecting depression. Findings indicate the presence of statistically significant differences in both acoustic features and labels among demographic groups, as well as differential ML performance among groups. The statistically significant differences present in the label space are partially preserved in the ML decisions. Although variations in ML performance across demographic groups were noted, results are mixed regarding the models' ability to accurately estimate healthcare outcomes for the sensitive groups.
    CONCLUSIONS: These findings underscore the necessity of careful and thoughtful design in developing ML models that can maintain crucial aspects of the data and perform effectively across all populations in digital healthcare applications.
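
    The adversarial feature transformation described in the methods is commonly implemented with a gradient-reversal layer; the sketch below shows that pattern in PyTorch. The layer sizes, feature dimensionality, and loss weighting are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips (and scales) gradients in the
        backward pass, so the encoder is trained to hurt the adversary."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    encoder = nn.Sequential(nn.Linear(40, 32), nn.ReLU())  # acoustic features -> embedding
    outcome_head = nn.Linear(32, 2)                        # e.g., depressed vs. not
    adversary = nn.Linear(32, 2)                           # e.g., gender classifier
    params = [*encoder.parameters(), *outcome_head.parameters(), *adversary.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(64, 40)         # one batch of acoustic feature vectors
    y = torch.randint(0, 2, (64,))  # health-outcome labels
    d = torch.randint(0, 2, (64,))  # demographic labels

    z = encoder(x)
    loss = loss_fn(outcome_head(z), y) + loss_fn(adversary(GradReverse.apply(z, 1.0)), d)
    opt.zero_grad(); loss.backward(); opt.step()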

  • Article type: Journal Article
    The 2030 Sustainable Development Goals (SDG) agenda has committed to 'ensuring that no one is left behind'. Applying the right to health of non-citizens and international migrants is challenging in today's highly polarized political discourse on migration governance and integration. We explore the role of a priority setting approach in supporting better, fairer, and more transparent policy making in migration health. A priority setting approach must also incorporate migration health for a more efficient and fair allocation of scarce resources. Explicitly recognizing trade-offs as part of strategic planning would circumvent ad hoc decision-making during crises, which is not well suited to fairness. Decisions about expanding services to migrants or subgroups of migrants (which services, and for whom) should be transparent and fair. We conclude that a priority setting approach can help better inform policy making by being more closely aligned with the practical challenges policy makers face in the progressive realization of migration health.

  • Article type: Journal Article
    AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies that show that people change their behavior to instill desired routines into AI, indicating this assumption is invalid. To show this behavioral shift, we recruited participants to play the ultimatum game, where they were asked to decide whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete this task again but were told their responses would not be used for AI training. People who had previously trained AI persisted with this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has more consequences than previously thought, since it can lead AI to perpetuate human biases and cause people to form habits that deviate from how they would normally act. Therefore, this work underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.
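
    To see why such a behavioral shift matters downstream, the sketch below fits a hypothetical AI proposer to simulated accept/reject data: when responders reject low offers more strictly (as training-aware participants did), the learned proposer offers more. The logistic acceptance rule and thresholds are assumptions, not the study's data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def simulate_responses(threshold: float, n: int = 2000):
        """Accept (1) / reject (0) decisions for random offer fractions,
        using a hypothetical logistic acceptance rule around `threshold`."""
        offers = rng.uniform(0.0, 0.5, n)
        p_accept = 1.0 / (1.0 + np.exp(-20.0 * (offers - threshold)))
        return offers.reshape(-1, 1), (rng.random(n) < p_accept).astype(int)

    for label, threshold in [("not told (lenient responders)", 0.15),
                             ("told they train AI (stricter responders)", 0.30)]:
        X, y = simulate_responses(threshold)
        model = LogisticRegression().fit(X, y)
        grid = np.linspace(0.0, 0.5, 501).reshape(-1, 1)
        # Proposer keeps (1 - offer); pick the offer maximizing expected payoff.
        expected = model.predict_proba(grid)[:, 1] * (1.0 - grid.ravel())
        print(f"{label}: learned proposer offers {grid[np.argmax(expected)][0]:.2f}")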

  • Article type: Journal Article
    Fairness in machine learning (ML) emerges as a critical concern as AI systems increasingly influence diverse aspects of society, from healthcare decisions to legal judgments. Many studies show evidence of unfair ML outcomes. However, the current body of literature lacks a statistically validated approach for evaluating the fairness of a deployed ML algorithm against a dataset. This research introduces a novel evaluation approach based on k-fold cross-validation and statistical t-tests to assess the fairness of ML algorithms. The approach was exercised across five benchmark datasets using six classical ML algorithms. Considering four fair-ML definitions guided by the current literature, our analysis showed that the same dataset can generate a fair outcome for one ML algorithm but an unfair result for another. Such an observation reveals complex, context-dependent fairness issues in ML, complicated further by the varied operational mechanisms of the underlying ML models. Our proposed approach enables researchers to check whether deploying a given ML algorithm against a protected attribute within a dataset is fair. We also discuss the broader implications of the proposed approach, highlighting the notable variability in its fairness outcomes. Our discussion underscores the need for adaptable fairness definitions and for exploring methods to enhance the fairness of ensemble approaches, aiming to advance fair ML practices and ensure equitable AI deployment across societal sectors.
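
    A minimal sketch of the evaluation idea, assuming demographic parity difference as the fairness metric and synthetic data in place of the benchmark datasets: compute the metric per cross-validation fold, then t-test the fold-level gaps against zero.

    import numpy as np
    from scipy.stats import ttest_1samp
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    protected = (X[:, 0] > 0).astype(int)  # hypothetical protected attribute

    gaps = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        pred, g = clf.predict(X[test]), protected[test]
        # Demographic parity difference: P(pred = 1 | g = 1) - P(pred = 1 | g = 0)
        gaps.append(pred[g == 1].mean() - pred[g == 0].mean())

    t, p = ttest_1samp(gaps, popmean=0.0)
    print(f"per-fold gaps: {np.round(gaps, 3)}, t = {t:.2f}, p = {p:.3f}")
    # A significantly nonzero mean gap flags this algorithm-dataset pair as
    # unfair under this definition.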
