HMAX
  • Article type: Journal Article
    Introduction: Cerebral palsy (CP) is a neurodevelopmental condition that results from an injury to the developing brain. Children with CP fail to execute precise, well-coordinated movements, and excessive muscular co-contraction or co-activation is a prominent attribute of CP. The normal reciprocal relationship between agonists and antagonists during voluntary movement is altered in patients with CP. The H-reflex, often regarded as the electrical equivalent of the spinal stretch reflex, can be used to examine the overall reflex arc, including Ia sensory afferent strength and the excitability state of spinal motoneurons. Furthermore, vibration has been found to have a neuromodulatory influence on the H-reflex, which is increasingly being investigated as a potential intervention for patients with increased spinal reflex excitability. Our goal was to identify the brain mechanisms underlying these motor deficits by studying soleus H-reflex changes during voluntary movement (dorsiflexion), and to determine the role of vibration in H-reflex modulation in children with spastic CP.
    Methods: The soleus H-reflex was recorded in 12 children with spastic CP (10-16 years) and 15 age-matched controls. Recordings were obtained at rest, during dorsiflexion, and during vibratory stimulation for each subject. H-responses (Hmax amplitudes and the Hmax-to-Mmax ratio) were compared for the controls and the cases (CP), for each experiment performed, using the Wilcoxon signed-rank test. Recruitment curves, depicting the distribution of mean H-response amplitudes across increasing stimulus intensities during dorsiflexion and vibration, were compared between controls and cases with the two-sample Kolmogorov-Smirnov (KS) test. A p-value <0.05 was considered statistically significant.
    Results: Hmax amplitudes and the Hmax-to-Mmax ratio increased from resting values in the children with CP during dorsiflexion (increments of 15% and 12.2%, respectively; p<0.05), while controls exhibited a decrease (reductions of 62% and 57%, respectively; p<0.05). Vibratory stimulation produced a decreasing trend in H-response measures in both groups: reductions of about 15% and 16%, respectively, among children with CP, versus 24% and 21%, respectively, among controls. The differences between the recruitment curves (distributions of average H-response amplitudes across stimulation intensities) recorded during the dorsiflexion and vibration experiments in controls versus children with CP were statistically significant by the two-sample KS test (p<0.0001).
    Conclusion: The failure of H-reflex suppression during voluntary antagonist muscle activation suggests impaired reciprocal inhibition in spastic CP. The relatively modest H-response reduction produced by vibratory stimulation in children with CP provides only limited evidence of vibratory modulation of the H-reflex in CP. More research into the mechanisms driving motor abnormalities in children with CP is needed, which could aid in therapy planning.
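The two statistical comparisons named in the Methods can be sketched with `scipy.stats`: a paired Wilcoxon signed-rank test for rest-versus-condition H-responses, and a two-sample KS test for recruitment-curve distributions. All sample values below are hypothetical placeholders, not the study's measurements.

```python
# Sketch of the abstract's statistical comparisons; data are illustrative only.
from scipy.stats import wilcoxon, ks_2samp

# Hypothetical paired Hmax-to-Mmax ratios for one group: rest vs. dorsiflexion.
rest = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58]
dorsiflexion = [0.45, 0.59, 0.43, 0.67, 0.54, 0.60, 0.53, 0.68]

# Wilcoxon signed-rank test: paired, non-parametric within-subject comparison.
stat, p_paired = wilcoxon(rest, dorsiflexion)

# Two-sample KS test: compares the recruitment-curve amplitude distributions
# (mean H-response per stimulus-intensity step) of two groups.
controls_curve = [0.1, 0.3, 0.6, 0.9, 1.0, 0.8, 0.5]
cp_curve = [0.2, 0.5, 0.9, 1.2, 1.3, 1.1, 0.9]
ks_stat, p_ks = ks_2samp(controls_curve, cp_curve)

print(p_paired < 0.05)  # True for this toy data: every paired value increased
```

Because every hypothetical pair moves in the same direction, the exact signed-rank p-value here is small; real significance depends entirely on the recorded data.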

  • Article type: Journal Article
    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs, and adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning.
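The plug-and-play idea reduces to a shared interface: any front-end feature model can feed any back-end category learner through a common feature vector. A minimal sketch, with toy stand-ins (none of these names come from the paper):

```python
from typing import Callable, List

def make_pipeline(feature_model: Callable[[object], List[float]],
                  category_learner: Callable[[List[float]], int]) -> Callable[[object], int]:
    """Chain a front-end feature extractor (e.g. an HMAX-like stage) into a
    back-end category learner (e.g. a COVIS-like stage)."""
    return lambda stimulus: category_learner(feature_model(stimulus))

# Toy stand-ins: a 'feature model' summarizing each bitmap row, and a
# threshold 'learner' assigning one of two categories.
toy_features = lambda bitmap: [sum(row) / len(row) for row in bitmap]
toy_learner = lambda feats: int(sum(feats) > 1.0)

classify = make_pipeline(toy_features, toy_learner)
print(classify([[0, 0], [0, 0]]), classify([[1, 1], [1, 1]]))  # 0 1
```

The design point is that the learner never sees pixels, only the feature vector, so either stage can be swapped for a more detailed model without touching the other.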

  • Article type: Journal Article
    In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed; among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, generating a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we enhance the HMAX model while still mimicking the first 100-150 ms of visual cognition, focusing mainly on the unsupervised feedforward feature-learning process. The main modifications are as follows: (1) To mimic the attention-modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering, and short-term-to-long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters from multiscale mid-level patches, which are taken as long-term memory; (3) Inspired by the multiple-feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted; results on Caltech101 show that the enhanced model, with a smaller memory size, exhibits higher accuracy than the original HMAX model and also achieves better accuracy than other unsupervised feature-learning methods on the multiclass categorization task.
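The S/C alternation at the heart of HMAX can be sketched with one S1 stage (Gabor-like filtering at several orientations) followed by one C1 stage (local max-pooling). This is a minimal NumPy illustration of the mechanism only; filter sizes and parameters are illustrative, not those of any published HMAX implementation.

```python
import numpy as np

def gabor_kernel(size=7, theta=0.0, sigma=2.0, lam=4.0):
    """Odd Gabor filter: the S1 'simple cell' template (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)
    return g - g.mean()

def s1_layer(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """S1: correlate the image with Gabors at several orientations ('valid' region)."""
    size = 7
    h, w = image.shape
    out = np.zeros((len(thetas), h - size + 1, w - size + 1))
    for k, th in enumerate(thetas):
        g = gabor_kernel(size=size, theta=th)
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.abs((image[i:i + size, j:j + size] * g).sum())
    return out

def c1_layer(s1, pool=2):
    """C1: 'complex cell' max-pooling over non-overlapping spatial neighborhoods."""
    n, h, w = s1.shape
    return s1[:, :h - h % pool, :w - w % pool] \
             .reshape(n, h // pool, pool, w // pool, pool).max(axis=(2, 4))

img = np.zeros((20, 20))
img[:, 10] = 1.0                     # a vertical edge
c1 = c1_layer(s1_layer(img))
print(c1.shape)                      # (4, 7, 7): pooled maps, one per orientation
```

The max-pooling step is what confers the position and scale tolerance the abstract refers to: small shifts of a feature within a pooling neighborhood leave the C1 output unchanged.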

  • Article type: Journal Article
    Non-accidental properties (NAPs) correspond to image properties that are invariant to changes in viewpoint (e.g., straight vs. curved contours) and are distinguished from metric properties (MPs) that can change continuously with in-depth object rotation (e.g., aspect ratio, degree of curvature, etc.). Behavioral and electrophysiological studies of shape processing have demonstrated greater sensitivity to differences in NAPs than in MPs. However, previous work has shown that such sensitivity is lacking in multiple-views models of object recognition such as Hmax. These models typically assume that object processing is based on populations of view-tuned neurons with distributed symmetrical bell-shaped tuning that are modulated at least as much by differences in MPs as in NAPs. Here, we test the hypothesis that unsupervised learning of invariances to object transformations may increase the sensitivity to differences in NAPs vs. MPs in Hmax. We collected a database of video sequences with objects slowly rotating in-depth in an attempt to mimic sequences viewed during object manipulation by young children during early developmental stages. We show that unsupervised learning yields shape-tuning in higher stages with greater sensitivity to differences in NAPs vs. MPs in agreement with monkey IT data. Together, these results suggest that greater NAP sensitivity may arise from experiencing different in-depth rotations of objects.
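The "symmetrical bell-shaped tuning" assumption the abstract criticizes can be made concrete: under Gaussian tuning over a feature space, an equal-sized step along an MP-like axis modulates the response exactly as much as one along a NAP-like axis, which is why such models show no extra NAP sensitivity before learning. A minimal sketch with hypothetical axes:

```python
import numpy as np

def view_tuned_response(features, preferred, sigma=1.0):
    """Bell-shaped (Gaussian) tuning: response falls off symmetrically with
    distance from the neuron's preferred stimulus in feature space."""
    d2 = np.sum((np.asarray(features) - np.asarray(preferred)) ** 2)
    return float(np.exp(-d2 / (2 * sigma ** 2)))

preferred = [0.0, 0.0]                        # [nap_axis, mp_axis] (illustrative)
r_nap = view_tuned_response([0.5, 0.0], preferred)  # step along the NAP-like axis
r_mp = view_tuned_response([0.0, 0.5], preferred)   # equal step along the MP-like axis
print(abs(r_nap - r_mp) < 1e-12)              # True: identical response modulation
```

Breaking this symmetry, so that NAP-axis steps cost more response than MP-axis steps, is precisely the effect the paper attributes to unsupervised learning from in-depth rotation sequences.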

  • Article type: Journal Article
    The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that the responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identity of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging, but rather robust object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, showing that the averaging-like responses can be understood as the result of responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance, namely, that target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter.
    The neural mechanisms underlying object recognition in cluttered scenes (i.e., containing more than one object) remain poorly understood. Studies have suggested that neural responses to multiple objects correspond to an average of the responses to the constituent objects. Yet, it is unclear how the identities of an unknown number of objects could be disentangled from a confounded average response. Here, we use a popular computational biological vision model to show that averaging-like responses can result from responses of clutter-tolerant neurons to suboptimal stimuli. The model also provides a novel prediction, that human detection ability should show a U-shaped dependency on target-clutter similarity, which is confirmed experimentally, supporting a simple, unifying account of how the brain performs object recognition in clutter.
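The averaging observation and the proposed sparse, clutter-tolerant readout can be illustrated with a toy population: pair responses are modeled as a weighted average of single-object responses, and the readout keeps only those preferring neurons whose response survives the pairing. All tuning values are randomly generated placeholders, not data from the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: each neuron's response to each of 5 single objects.
n_neurons, n_objects = 50, 5
single = rng.uniform(0.0, 1.0, size=(n_neurons, n_objects))

def pair_response(single, i, j, w=0.5):
    """Averaging model: the response to a pair of objects i and j is a
    weighted average of the responses to each object presented alone."""
    return w * single[:, i] + (1 - w) * single[:, j]

pair = pair_response(single, 0, 1)

# Neurons preferring object 0, i.e., for which object 0 is the best stimulus.
preferred_obj0 = single[:, 0] >= single.max(axis=1) - 1e-9

# 'Clutter-tolerant' subset: preferring neurons whose pair response stays
# close to their response to object 0 alone (little averaging-induced drop).
tolerant = preferred_obj0 & (pair > 0.8 * single[:, 0])

# A sparse readout of this subpopulation still signals object 0 in clutter.
print(int(tolerant.sum()), int(preferred_obj0.sum()))
```

For preferring neurons the averaging model can only pull the response down (the paired object is, by definition, a worse stimulus), so tolerance here means the drop is small; the sparse readout discards the rest.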

  • Article type: Journal Article
    Key properties of inferior temporal cortex neurons are described, and the biological plausibility of two leading approaches to invariant visual object recognition in the ventral visual system is then assessed to investigate whether they account for these properties. Experiment 1 shows that VisNet performs object classification with random exemplars comparably to HMAX, except that the final-layer C neurons of HMAX have a very non-sparse representation (unlike that in the brain) that provides little information in single-neuron responses about the object class. Experiment 2 shows that VisNet forms invariant representations when trained with different views of each object, whereas HMAX performs poorly when assessed with a biologically plausible pattern-association network, as HMAX has no mechanism to learn view invariance. Experiment 3 shows that VisNet neurons do not respond to scrambled images of faces and thus encode shape information; HMAX neurons responded with similarly high rates to the unscrambled and scrambled faces, indicating that low-level features, including texture, may be relevant to HMAX performance. Experiment 4 shows that VisNet can learn to recognize objects even when the view provided by the object changes catastrophically as it transforms, whereas HMAX has no learning mechanism in its S-C hierarchy that provides for view-invariant learning. These results highlight some requirements for the neurobiological mechanisms of high-level vision, and how some different approaches perform, to help understand the fundamental principles underlying invariant visual object recognition in the ventral visual stream.
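The sparseness contrast in Experiment 1 can be quantified with a standard population-sparseness measure. A minimal sketch using the Treves-Rolls measure, one common choice in this literature (the abstract does not specify which measure the paper used):

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls sparseness of a non-negative response vector r:
    a = (mean r)^2 / mean(r^2). Approaches 1/N for a one-hot (highly
    sparse) code and 1 for a uniform (maximally non-sparse) code."""
    r = np.asarray(r, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)

one_hot = np.zeros(100); one_hot[0] = 1.0   # sparse code: one active neuron
uniform = np.ones(100)                       # non-sparse code: all equally active

print(round(treves_rolls_sparseness(one_hot), 3),   # 0.01 (= 1/N)
      round(treves_rolls_sparseness(uniform), 3))   # 1.0
```

On this scale, the abstract's claim is that HMAX final-layer C responses sit near the non-sparse end while cortical representations sit much closer to the sparse end.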

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model with the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in how they compute visual dictionaries and in their pooling techniques. We investigated where in the brain, and to what extent, human fMRI responses to a short video can be accounted for by the multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects, obtained while viewing a short video clip, was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2, and V3. However, BoW accounts for brain activity more consistently across subjects than HMAX. Furthermore, the visual dictionary representations of HMAX and BoW explain a significant portion of brain activity in higher areas believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to represent neural responses in low- and intermediate-level visual areas of the brain more faithfully.
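The logic of variation partitioning can be sketched in its classical linear-regression form: fit each model's feature space alone and jointly, then decompose explained variance into unique and shared components. This is a simplified R²-based sketch of the general idea (the paper uses a distance-based variant); all data and feature spaces here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Toy 'voxel' time course driven by two overlapping model feature spaces.
n = 200
A = rng.normal(size=(n, 3))                          # e.g. HMAX-layer features
B = np.column_stack([A[:, 0], rng.normal(size=n)])   # e.g. BoW features, sharing one dimension
y = A @ np.array([1.0, 0.5, 0.0]) + 0.3 * B[:, 1] + 0.1 * rng.normal(size=n)

r2_A, r2_B, r2_AB = r2(A, y), r2(B, y), r2(np.column_stack([A, B]), y)
unique_A = r2_AB - r2_B        # variance only model A explains
unique_B = r2_AB - r2_A        # variance only model B explains
shared = r2_A + r2_B - r2_AB   # variance both models explain
print(f"unique A: {unique_A:.3f}, unique B: {unique_B:.3f}, shared: {shared:.3f}")
```

Applied voxel-wise, maps of the unique components are what let a study say one model accounts for a region's activity over and above the other.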

  • Article type: Journal Article
    To improve robustness in object recognition, many artificial visual systems imitate the way in which the human visual cortex encodes object information as a hierarchical set of features. These systems are usually evaluated in terms of their ability to accurately categorize well-defined, unambiguous objects and scenes. In the real world, however, not all objects and scenes are presented clearly, with well-defined labels and interpretations. Visual illusions demonstrate a disparity between perception and objective reality, allowing psychophysicists to methodically manipulate stimuli and study our interpretation of the environment. One prominent effect, the Müller-Lyer illusion, is demonstrated when the perceived length of a line is contracted (or expanded) by the addition of arrowheads (or arrow-tails) to its ends. HMAX, a benchmark object recognition system, consistently produces a bias when classifying Müller-Lyer images. HMAX is a hierarchical artificial neural network that imitates the "simple" and "complex" cell layers found in the visual ventral stream. In this study, we perform two experiments to explore the Müller-Lyer illusion in HMAX, asking: (1) How do simple vs. complex cell operations within HMAX affect illusory bias and precision? (2) How does varying the position of the figures in the input image affect classification using HMAX? In our first experiment, we assessed classification after traversing each layer of HMAX and found that, in general, kernel operations performed by simple cells increase bias and uncertainty, while max-pooling operations executed by complex cells decrease bias and uncertainty. In our second experiment, increasing the variation in the positions of figures in the input images reduced bias and uncertainty in HMAX. Our findings suggest that the Müller-Lyer illusion is exacerbated by the vulnerability of simple cell operations to positional fluctuations, but ameliorated by the robustness of complex cell responses to such variance.
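The contrast the findings rest on, simple-cell kernel operations being position-locked while complex-cell max-pooling tolerates small shifts, can be shown in one dimension: shifting a stimulus within a pooling window changes the filter response map but leaves the pooled map unchanged. The filter and window sizes are illustrative, not HMAX's.

```python
import numpy as np

def simple_response(signal, kernel):
    """'Simple cell': position-locked sliding dot products (valid correlation)."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(len(signal) - k + 1)])

def complex_response(resp, pool=5):
    """'Complex cell': max over non-overlapping local pooling windows."""
    return np.array([resp[i:i + pool].max()
                     for i in range(0, len(resp) - pool + 1, pool)])

kernel = np.array([-1.0, 2.0, -1.0])        # toy edge-like filter (illustrative)
base = np.zeros(30); base[11] = 1.0         # impulse stimulus
shifted = np.zeros(30); shifted[13] = 1.0   # same stimulus, shifted by 2 positions

s_base, s_shift = simple_response(base, kernel), simple_response(shifted, kernel)
c_base, c_shift = complex_response(s_base), complex_response(s_shift)

# Simple-cell maps differ under the shift; the pooled maps are identical
# because both response peaks fall inside the same pooling window.
print(np.array_equal(s_base, s_shift), np.array_equal(c_base, c_shift))  # False True
```

Shifts larger than a pooling window would change the pooled map too, which is consistent with max-pooling granting tolerance only to local positional fluctuations.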

  • Article type: Journal Article
    To respond appropriately to objects, we must process visual inputs rapidly and assign them meaning. This involves highly dynamic, interactive neural processes through which information accumulates and cognitive operations are resolved across multiple time scales. However, there is currently no model of object recognition that provides an integrated account of how visual and semantic information emerge over time; therefore, it remains unknown how and when semantic representations are evoked from visual inputs. Here, we test whether a model of individual objects, based on combining the HMax computational model of vision with semantic-feature information, can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations than HMax alone, in both model fit and classification performance. Our results show that modeling and classifying individual objects is significantly improved by adding semantic-feature information beyond ∼200 ms. These results provide important insights into the functional properties of visual processing across time.
