Visual geometry

  • Article type: Journal Article
    Robust segmentation of large and complex conjoined tree structures in 3-D is a major challenge in computer vision. This is particularly true in computational biology, where we often encounter data structures that are large in size but few in number, which poses a hard problem for learning algorithms. We show that merging multiscale opening with geodesic path propagation can shed new light on this classic machine vision challenge, while circumventing the learning issue through an unsupervised visual geometry approach (digital topology/morphometry). The novelty of the proposed MSO-GP method comes from the geodesic path propagation being guided by a skeletonization of the conjoined structure, which helps achieve robust segmentation results in a particularly challenging task in this area: artery-vein separation from non-contrast pulmonary computed tomography angiograms. This is an important first step in measuring vascular geometry in order to diagnose pulmonary diseases and to develop image-based phenotypes. We first present proof-of-concept results on synthetic data, and then verify performance on pig lung and human lung data, requiring less segmentation time and user intervention than competing methods.
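
    The abstract describes the MSO-GP pipeline only at a high level, so a minimal sketch of the skeleton-guided, seed-based label propagation idea is given below. This is not the authors' implementation: the function name separate_conjoined, the use of scikit-image's skeletonize, and the watershed-on-distance-transform stand-in for geodesic path propagation are all assumptions made for illustration.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.morphology import skeletonize
        from skimage.segmentation import watershed

        def separate_conjoined(mask, seeds):
            """Split a conjoined binary structure into labelled branches.

            mask  : 3-D boolean array holding the conjoined tree structure.
            seeds : integer array of the same shape, non-zero at user-placed
                    seed voxels (e.g. 1 = artery root, 2 = vein root).
            """
            # Skeleton of the conjoined structure (in the paper it guides the
            # geodesic propagation; returned here only for inspection).
            skeleton = skeletonize(mask)

            # Distance transform: large inside thick vessels, zero outside.
            dist = ndi.distance_transform_edt(mask)

            # Keep only seeds that actually lie on the structure.
            seeds = np.where(mask, seeds, 0)

            # Grow labels from the seeds through the mask; using -dist as the
            # landscape favours paths along vessel centrelines, a crude
            # substitute for true geodesic path propagation.
            labels = watershed(-dist, markers=seeds, mask=mask)
            return labels, skeleton

        # Toy example: two touching tubes standing in for an artery-vein pair.
        vol = np.zeros((20, 20, 40), dtype=bool)
        vol[8:12, 6:10, :] = True     # "artery"
        vol[8:12, 10:14, :] = True    # "vein", touching the first tube
        seeds = np.zeros(vol.shape, dtype=int)
        seeds[10, 7, 0] = 1
        seeds[10, 12, 0] = 2
        labels, _ = separate_conjoined(vol, seeds)
        print(np.unique(labels))      # -> [0 1 2]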

  • Article type: Journal Article
    The aim of this study is to verify, through four experiments, the conditions under which a series of visual stimuli (line segments) will be subjectively perceived as visual lines or surfaces. Two experiments were conducted with the method of subjective evaluation of the line segments, and the other two with the Osgood semantic differential. We analysed five variables (thickness, type, orientation, and colour) potentially responsible for the lines' categorisation. The four experiments gave similar results: higher importance of the variables thickness and type; generally lower significance of the variable colour; and general insignificance of the variable orientation. Interestingly, for the variable type, straight lines are evaluated as surfaces more frequently than curved lines and are perceived as geometrical, flat, hard, static, rough, sharp, bound, sour, frigid, masculine, cold and passive. Curved lines are prevalently evaluated as lines, and categorised as organic, rounded, soft, dynamic, fluffy, blunt, free, sweet, sensual, feminine, warm and active. These results highlight the specificity of the perceptual characteristics of the considered variables and confirm the relevance of characteristics such as thickness and type.

  • Article type: Journal Article
    This paper attempts to differentiate between two models of visual space. One model suggests that visual space is a simple affine transformation of physical space. The other proposes that it is a transformation of physical space via the laws of perspective. The present paper reports two experiments in which participants were asked to judge the size of the interior angles of squares placed at five different distances from the participant. The perspective-based model predicts that the angles within each square on the side nearest to the participant should appear smaller than those on the far side. Under our conditions, the simple affine model predicts that the perceived size of the angles of each square should remain 90°. The results of both experiments were most consistent with the perspective-based model. In both experiments, for all five squares, the angles on the near side of each square were estimated to be significantly smaller than those on the far side. In addition, the sum of the estimated sizes of the four angles of each square declined with increasing distance from the participant to the square and was less than 360° for all but the nearest square.
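
    The perspective-based prediction can be made concrete with a small numeric sketch, given below. This is not the authors' stimulus geometry or analysis; the eye height, viewing distance and square size are arbitrary assumed values, chosen only to show that a pinhole projection of a ground-plane square yields near-corner image angles below 90° and far-corner angles above 90°.

        import numpy as np

        def project(p, f=1.0):
            # Pinhole projection of the 3-D point p = (x, y, z) onto the
            # image plane z = f (camera at the origin, looking along +z).
            x, y, z = p
            return np.array([f * x / z, f * y / z])

        def interior_angle(vertex, a, b):
            # Angle in degrees at `vertex` between the rays toward `a` and `b`.
            u, v = a - vertex, b - vertex
            cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # A 1-m square lying on the ground (eye height 1.5 m), with its near
        # edge 3 m in front of the observer.
        d, s, h = 3.0, 1.0, 1.5
        near_l = np.array([-s / 2, -h, d])
        near_r = np.array([ s / 2, -h, d])
        far_l  = np.array([-s / 2, -h, d + s])
        far_r  = np.array([ s / 2, -h, d + s])

        NL, NR, FL, FR = (project(p) for p in (near_l, near_r, far_l, far_r))

        print("near-left corner:", interior_angle(NL, NR, FL))  # ~71.6 deg, < 90
        print("far-left corner :", interior_angle(FL, FR, NL))  # ~108.4 deg, > 90

    Note that the four image angles of the projected trapezoid still sum to 360°; the sub-360° sums reported in the abstract concern judgements made in 3-D visual space, which this flat-image sketch does not model.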

  • Article type: Journal Article
    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features such as parallel lines and repeated patterns. With vanishing points extracted from three mutually orthogonal directions, the interior and exterior orientation parameters can then be calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each group of collinear points with O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascaded Hough transform. The experimental results indicate that the vanishing point refinement process can significantly improve the camera calibration parameters, and that the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%.
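
    The abstract does not spell out how the interior parameters follow from three orthogonal vanishing points, so a minimal numpy sketch of the classical closed-form step is given below. This is not the paper's refinement algorithm; the function name calibrate_from_vps and the synthetic test values are assumptions. Square pixels and zero skew are assumed, in which case the principal point is the orthocenter of the vanishing-point triangle and the focal length follows from (v1 - p) · (v2 - p) = -f².

        import numpy as np

        def calibrate_from_vps(v1, v2, v3):
            # Principal point and focal length from three finite vanishing
            # points of mutually orthogonal directions (square pixels, zero
            # skew assumed).
            v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))

            # Principal point = orthocenter of the triangle (v1, v2, v3):
            # (p - v1) is perpendicular to (v2 - v3), and
            # (p - v2) is perpendicular to (v3 - v1).
            A = np.array([v2 - v3, v3 - v1])
            b = np.array([np.dot(v1, v2 - v3), np.dot(v2, v3 - v1)])
            p = np.linalg.solve(A, b)

            # Orthogonality of the first two directions gives the focal length.
            f2 = -np.dot(v1 - p, v2 - p)
            if f2 <= 0:
                raise ValueError("vanishing points inconsistent with orthogonal directions")
            return p, np.sqrt(f2)

        # Synthetic check: camera with f = 800 px and principal point (320, 240).
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        d1 = np.array([1.0, 0.2, 0.3])
        d2 = np.array([-0.2, 1.0, 0.1])
        d2 -= d1 * np.dot(d1, d2) / np.dot(d1, d1)   # make d2 orthogonal to d1
        d3 = np.cross(d1, d2)                        # third orthogonal direction
        vps = [(K @ d)[:2] / (K @ d)[2] for d in (d1, d2, d3)]

        p, f = calibrate_from_vps(*vps)
        print(p, f)   # ~[320. 240.] 800.0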