grasping

  • Article Type: Journal Article
    Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral stream areas during grasp planning, then in premotor regions during grasp execution. Object mass was encoded in ventral stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects. SIGNIFICANCE STATEMENT: Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and, surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
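    The representational similarity analysis mentioned above compares, for each region and task phase, pairwise dissimilarities between neural activity patterns against model dissimilarity matrices built from factors such as grasp axis, grasp size, and object mass. The sketch below illustrates only that generic comparison; the condition labels, data shapes, and rank-correlation step are assumptions for illustration, not the study's actual pipeline.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      # Hypothetical data: one multivoxel activity pattern per grasp condition
      # (n_conditions x n_voxels), e.g., extracted from a single ROI.
      rng = np.random.default_rng(0)
      neural_patterns = rng.normal(size=(8, 200))

      # Neural representational dissimilarity matrix (RDM): correlation distance
      # between condition patterns, vectorized over the upper triangle.
      neural_rdm = pdist(neural_patterns, metric="correlation")

      # Model RDM for one factor, e.g., grasp size: same-size condition pairs are
      # predicted to be similar (0), different-size pairs dissimilar (1).
      grasp_size = np.array([1, 1, 2, 2, 1, 1, 2, 2])   # illustrative labels
      model_rdm = pdist(grasp_size[:, None], metric="hamming")

      # Relate the neural RDM to the model RDM (rank correlation is a common choice).
      rho, p = spearmanr(neural_rdm, model_rdm)
      print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.3f}")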

  • Article Type: Journal Article
    The primary motivation of this paper is to present a compliant and cost-effective solution for object transfer between human and robot. The application prospect of this study is robot-human collaboration in manufacturing. To achieve the above goals, a novel modular 3-axis force sensor is proposed for the grasping system to achieve interactive force sensing. A compliant object-transfer control strategy, composed of an incremental force control mode and a gravity balance control mode, is proposed for object transfer between human and robot. A prototype of an underactuated grasping system mounted on the proposed modular 3-axis force sensor is fabricated to investigate the effectiveness of the proposed interactive control strategy. Experimental results reveal that the incremental force control mode is suitable for lighter objects, offering higher interactive sensitivity. For transferring heavier objects, the gravity balance control mode is more suitable; in this mode, the human hand can reach a quasi-static equilibrium with the object and achieve a compliant transfer operation. Owing to these characteristics, the proposed control strategy has the potential to enhance compliance and safety in the human-robot object transfer process.
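    The abstract names two handover modes but does not give their control laws. The sketch below only illustrates the general idea of an incremental force control loop, in which the commanded grip force is reduced in small steps whenever the interaction force measured by a 3-axis sensor indicates that the human is pulling; the control law, thresholds, step size, and simulated sensor readings are assumptions, not the paper's actual controller.

      def incremental_grip_update(grip_cmd: float, pull_force: float,
                                  release_threshold: float = 2.0,
                                  step: float = 0.2) -> float:
          """Lower the commanded grip force by one increment when the measured
          human pull force exceeds a threshold; otherwise hold the current grip."""
          if pull_force > release_threshold:
              grip_cmd -= step
          return max(grip_cmd, 0.0)

      # Simulated handover: the human gradually pulls harder on the object.
      grip = 5.0
      for pull in [0.5, 1.0, 2.5, 3.0, 3.0, 3.5]:   # hypothetical sensor readings (N)
          grip = incremental_grip_update(grip, pull)
          print(f"pull={pull:.1f} N -> grip command={grip:.1f} N")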

  • Article Type: Journal Article
    Obtaining accurate depth information is key to robot grasping tasks. However, RGB-D cameras have difficulty perceiving transparent objects owing to their refraction and reflection properties, which makes it difficult for humanoid robots to perceive and grasp everyday transparent objects. To remedy this, existing studies usually remove transparent object areas using a model that learns patterns from the remaining opaque areas so that depth estimation can be completed; notably, this frequently leads to deviations from the ground truth. In this study, we propose a new depth completion method [i.e., ClueDepth Grasp (CDGrasp)] that works more effectively with transparent objects in RGB-D images. Specifically, we propose a ClueDepth module, which leverages a geometry method to filter out refractive and reflective points while preserving the correct depths, consequently providing crucial positional clues for object location. To acquire sufficient features to complete the depth map, we design a DenseFormer network that integrates DenseNet to extract local features and swin-transformer blocks to obtain the required global information. Furthermore, to fully utilize the information obtained from multi-modal visual maps, we devise a Multi-Modal U-Net Module to capture multiscale features. Extensive experiments conducted on the ClearGrasp dataset show that our method achieves state-of-the-art performance in terms of accuracy and generalization of depth completion for transparent objects, and its successful use for humanoid robot grasping verifies the efficacy of the proposed method.
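    The learned modules named above (ClueDepth filtering, DenseFormer, the Multi-Modal U-Net) are not reproduced here. The snippet below only illustrates the underlying problem setup in generic terms: masking out missing or unreliable depth readings in a transparent-object region and filling them from surrounding valid measurements; the zero-depth mask heuristic and the interpolation step are assumptions standing in for the paper's learned pipeline.

      import numpy as np
      from scipy.interpolate import griddata

      # Hypothetical raw depth map (meters); zeros mark pixels where the RGB-D
      # camera returned no usable depth, as often happens on transparent objects.
      depth = np.array([[0.50, 0.51, 0.52, 0.53],
                        [0.50, 0.00, 0.00, 0.53],
                        [0.51, 0.00, 0.00, 0.54],
                        [0.51, 0.52, 0.53, 0.54]])

      invalid = depth == 0.0                      # simplistic mask (assumption)
      rows, cols = np.indices(depth.shape)
      valid_pts = np.column_stack([rows[~invalid], cols[~invalid]])
      valid_vals = depth[~invalid]
      query_pts = np.column_stack([rows[invalid], cols[invalid]])

      # Fill missing depths from neighboring valid pixels; a learned network such
      # as DenseFormer would replace this naive interpolation step.
      completed = depth.copy()
      completed[invalid] = griddata(valid_pts, valid_vals, query_pts, method="linear")
      print(completed)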

  • Article Type: Journal Article
    The increasing demand for grasping diverse objects in unstructured environments poses severe challenges to existing soft/rigid robotic fingers owing to the difficulty of balancing force, compliance, and stability, and has hence given birth to several hybrid designs. These hybrid designs exploit the advantages of rigid and soft structures and show better performance, but they still suffer from a narrow output force range, limited compliance, and rarely reported stability. Owing to its rigid-soft coupling structure with flexibly switched multiple poses, the human finger, as an excellent hybrid design, shows a wide output force range, excellent compliance, and stability. Inspired by the human finger, we propose a hybrid finger with multiple modes and poses, coupled by a soft actuator (SA) and a rigid actuator (RA) in parallel. The multiple actuation modes formed by a pneumatic-based rigid-soft collaborative strategy can selectively enable the RA's high force and the SA's softness, whereas the multiple poses derived from the specially designed underactuated RA skeleton can be flexibly switched with tasks, thus achieving high compliance. Such hybrid fingers also proved to be highly stable under external stimuli or gravity. Furthermore, we modularize and configure these fingers into a series of grippers with excellent grasping performance, for example, a wide graspable object range (from 0.1 g potato chips to 27 kg dumbbells for a 420 g two-finger gripper), high compliance (tolerating objects up to 94% of the gripper span size and a 4 cm offset), and high stability. Our study highlights the potential of fusing rigid-soft technologies for robot development, and may influence future bionics and high-performance robot development.

  • Article Type: Journal Article
    While reaching and grasping are highly prevalent manual actions, neuroimaging studies provide evidence that their neural representations may be shared between different body parts, i.e., effectors. If these actions are guided by effector-independent mechanisms, similar kinematics should be observed when the action is performed by the hand or by a cortically remote and less experienced effector, such as the foot. We tested this hypothesis with two characteristic components of action: the initial ballistic stage of reaching, and the preshaping of the digits during grasping based on object size. We examined if these kinematic features reflect effector-independent mechanisms by asking participants to reach toward and to grasp objects of different widths with their hand and foot. First, during both reaching and grasping, the velocity profile up to peak velocity matched between the hand and the foot, indicating a shared ballistic acceleration phase. Second, maximum grip aperture and time of maximum grip aperture of grasping increased with object size for both effectors, indicating encoding of object size during transport. Differences between the hand and foot were found in the deceleration phase and time of maximum grip aperture, likely due to biomechanical differences and the participants' inexperience with foot actions. These findings provide evidence for effector-independent visuomotor mechanisms of reaching and grasping that generalize across body parts.
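    Peak velocity and maximum grip aperture are standard measures of the transport and grasp components, respectively. The sketch below shows one generic way to compute them from sampled wrist and digit trajectories; the synthetic trajectories, sampling rate, and marker definitions are assumptions for illustration only.

      import numpy as np

      fs = 200.0                                    # assumed sampling rate (Hz)
      t = np.arange(0, 1.0, 1 / fs)

      # Hypothetical 3D wrist trajectory and thumb/index marker positions (cm).
      wrist = np.column_stack([30 * (1 - np.cos(np.pi * t)) / 2,
                               np.zeros_like(t), np.zeros_like(t)])
      offset = np.column_stack([np.zeros_like(t),
                                4 * np.sin(np.pi * t), np.zeros_like(t)])
      thumb, index = wrist + offset, wrist - offset

      # Transport component: tangential wrist velocity and its peak.
      speed = np.linalg.norm(np.gradient(wrist, 1 / fs, axis=0), axis=1)
      peak_velocity, t_pv = speed.max(), t[speed.argmax()]

      # Grasp component: grip aperture (thumb-index distance), its maximum and timing.
      aperture = np.linalg.norm(thumb - index, axis=1)
      mga, t_mga = aperture.max(), t[aperture.argmax()]

      print(f"peak velocity {peak_velocity:.1f} cm/s at {t_pv:.2f} s; "
            f"max grip aperture {mga:.1f} cm at {t_mga:.2f} s")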

  • Article Type: Journal Article
    Previous studies have shown that our perception of stimulus properties can be affected by the emotional nature of the stimulus. It is not clear, however, how emotions affect visually-guided actions toward objects. To address this question, we used toy rats, toy squirrels, and wooden blocks to induce negative, positive, and neutral emotions, respectively. Participants were asked to report the perceived distance and the perceived size of a target object resting on top of one of the three emotion-inducing objects, or to grasp the same target object either without visual feedback (open-loop) or with visual feedback (closed-loop) of both the target object and their grasping hand during the execution of grasping. We found that the target object was perceived as closer and larger, but was grasped with a smaller grip aperture, in the rat condition than in the squirrel and wooden-block conditions when no visual feedback was available. With visual feedback present, this difference in grip aperture disappeared. These results show that negative emotion influences both perceived size and grip aperture, but in opposite directions (larger perceived size but smaller grip aperture), and that its influence on grip aperture can be corrected by visual feedback, revealing different effects of emotion on perception and action. Our results have implications for understanding the relationship between perception and action under emotional conditions, showing a novel divergence from previous theories.

  • Article Type: Journal Article
    The proposal of postural synergy theory has provided a new approach to solving the problem of controlling anthropomorphic hands with multiple degrees of freedom. However, generating the grasp configuration for new tasks in this context remains challenging. This study proposes a method to learn grasp configurations according to the shape of the object by using postural synergy theory. Drawing on past research, an experimental paradigm is first designed that enables the grasping of 50 typical objects in grasping and operational tasks. The finger joint angles of 10 subjects were then recorded while performing these tasks. Following this, four hand primitives were extracted by using principal component analysis, and a low-dimensional synergy subspace was established. The problem of planning the joint trajectories was thus transformed into that of determining the synergy input for trajectory planning in the low-dimensional space. The average synergy inputs for the trajectories of each task were obtained through Gaussian mixture regression, and several Gaussian processes were trained to infer the input trajectories for a given shape descriptor in similar tasks. Finally, the feasibility of the proposed method was verified by simulations involving the generation of grasp configurations for prosthetic hand control. The error in the reconstructed posture was compared with that obtained using postural synergies in past work. The results show that the proposed method can realize movements similar to those of the human hand during grasping actions, and its range of use can be extended from simple grasping tasks to complex operational tasks.
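    The synergy extraction described here rests on principal component analysis of recorded joint angles, after which planning happens in the low-dimensional synergy space. The sketch below shows that step in generic form, with synthetic joint-angle data and a four-component subspace standing in for the recorded postures; the data and dimensions are assumptions for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical recordings: joint angles (degrees) for many hand postures,
      # e.g., 500 sampled postures x 20 finger joints (stand-in for real data).
      rng = np.random.default_rng(1)
      latent = rng.normal(size=(500, 4))                 # 4 underlying "synergies"
      mixing = rng.normal(size=(4, 20))
      joint_angles = latent @ mixing + 0.1 * rng.normal(size=(500, 20))

      # Extract four postural synergies (principal components of the joint space).
      pca = PCA(n_components=4).fit(joint_angles)
      print("explained variance:", round(float(pca.explained_variance_ratio_.sum()), 3))

      # A 4-D synergy input is mapped back to a full 20-joint hand posture, so
      # trajectory planning can be done in the low-dimensional space.
      synergy_input = np.array([[1.0, -0.5, 0.2, 0.0]])
      posture = pca.inverse_transform(synergy_input)     # shape (1, 20)
      print("reconstructed joint angles:", np.round(posture, 1))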

  • Article Type: Journal Article
    Collecting seafood animals (such as sea cucumbers, sea echini, scallops, etc.) cultivated in shallow water (water depth: ~30 m) is a profitable and emerging field that requires robotics to replace human divers. Soft robotics has several promising features for performing such a task (e.g., safe contact with objects and light weight). In this paper, we implement a soft manipulator with an opposite-bending-and-extension structure. A simple and rapid inverse kinematics method is proposed to control the spatial location and trajectory of the underwater soft manipulator's end effector. We introduce the actuation hardware of the prototype, and then characterize its trajectory and workspace. We find that the prototype can closely track fundamental trajectories such as a line and an arc. Finally, we construct a small underwater robot and demonstrate that the underwater soft manipulator successfully collects multiple irregularly shaped seafood animals of different sizes and stiffness at the bottom of a natural oceanic environment (water depth: ~10 m).
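    The paper's inverse kinematics method is only named, not detailed, in this abstract. As a hedged illustration, the sketch below uses the constant-curvature model common in soft robotics for a single planar segment that can both bend and extend, which admits a closed-form inverse solution; the planar, single-segment geometry is an assumption, not necessarily the paper's formulation.

      import math

      def planar_cc_ik(x_t: float, y_t: float):
          """Closed-form IK for one planar constant-curvature segment that can bend
          and extend: returns (bend_angle, arc_length) reaching target (x_t, y_t).
          The base tangent points along +y; x_t is the lateral target offset."""
          theta = 2.0 * math.atan2(x_t, y_t)      # chord direction -> bend angle
          chord = math.hypot(x_t, y_t)
          if abs(theta) < 1e-9:                   # straight configuration
              return 0.0, chord
          return theta, chord * theta / (2.0 * math.sin(theta / 2.0))

      def planar_cc_fk(theta: float, arc_length: float):
          """Forward kinematics of the same segment, for checking the IK solution."""
          if abs(theta) < 1e-9:
              return 0.0, arc_length
          r = arc_length / theta                  # bending radius
          return r * (1.0 - math.cos(theta)), r * math.sin(theta)

      theta, length = planar_cc_ik(0.05, 0.20)    # target: 5 cm lateral, 20 cm ahead
      print(theta, length, planar_cc_fk(theta, length))  # FK recovers the target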

  • Article Type: Journal Article
    The unique morphological bases of human hands, which are distinct from those of other primates, endow them with excellent grasping and manipulative abilities. However, the lack of understanding of human hand morphology and its parametric features is a major obstacle to the scientific design of prosthetic hands. Existing designs of prosthetic hand morphologies mostly adopt engineering-based methods, which depend on human experience, direct measurements of human hands, or numerical simulation/optimization. This paper explores for the first time a science-driven design method for prosthetic hand morphology, aiming to facilitate the development of prosthetic hands with human-level dexterity. We first use human morphological, movement, and postural data to quantitatively characterize general morphological features of human hands from static, dynamic, functional, and non-functional perspectives. Taking these characterizations as a basis, we develop a method able to quickly transfer human morphological parameters to prosthetic hands while endowing the prosthetic hands with great grasping/manipulative potential. We apply this method to the design of an advanced prosthetic hand (called X-hand II) embedded with compact actuating systems. The human-sized prosthetic hand can reach wide grasping/manipulative ranges close to those of human hands, replicate various daily grasping types, and even execute dexterous in-hand manipulation. This science-driven method may also inspire other artificial limb and bionic robot designs.

  • Article Type: Journal Article
    Small and manipulable objects (tools) preferentially evoke a network of brain regions relative to other objects, including the lateral occipitotemporal cortex (LOTC), which is assumed to process tool shape information. Given the correlation between various object properties, the exact type of information represented in the LOTC remains debated. In three fMRI experiments, we examined the effects of multiple levels of shape (whole object vs. object parts) and motor-related (grasping; manipulation) information. Combining representational similarity analysis and commonality analysis allowed us to partition the unique and shared effects of correlated dimensions. We found that grasping manner (for pickup), not the overall object shape or manner of manipulation, uniquely explained the LOTC neural activity pattern (Experiments 1 and 2). Experiment 3 tested tools composed of two parts to better understand how grasping manner is computed from object visual inputs. Support vector machine analysis revealed that LOTC activity could decode different shapes of the tools' handle parts but not of the tools' head parts. Together, these results suggest that the LOTC parses tool shapes by how they map onto grasping programs; such parsing is not fully based on the whole-object shape but rather on an interaction between the whole (where to grasp) and its parts (distinguishing the shape of the grasping part for specific grasping manners).
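    The decoding step in Experiment 3 corresponds to cross-validated classification of condition labels from multivoxel ROI patterns with a support vector machine. The sketch below illustrates that generic procedure with synthetic data; the data shapes, labels, injected signal, and cross-validation scheme are assumptions, not the study's pipeline.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Hypothetical dataset: one multivoxel pattern per trial from an ROI such as
      # LOTC (120 trials x 150 voxels), with a binary handle-shape label per trial.
      rng = np.random.default_rng(2)
      patterns = rng.normal(size=(120, 150))
      labels = np.repeat([0, 1], 60)                    # e.g., two handle shapes
      patterns[labels == 1, :10] += 0.5                 # inject a weak signal

      # Cross-validated linear SVM decoding; chance level is 0.5 for two classes.
      clf = SVC(kernel="linear", C=1.0)
      scores = cross_val_score(clf, patterns, labels, cv=5)
      print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")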