Scaling method

  • Article type: Journal Article
    The phenomenon of rate-dependent adhesion has long been recognized as an intricate problem, and the physics- and mechanics-based approaches developed so far have yielded analytical relations of implicit form between the work of adhesion and the contact front velocity, which are difficult to apply in practice. To address this issue in the framework of spherical indentation, an adhesion relaxation test in a nominal point contact is introduced to estimate rate-dependent adhesion. Based on a stretched-exponential approximation for the evolution of the contact radius with time, a relatively simple four-parameter model is proposed for the functional relation between the work of adhesion and the contact front velocity, and its fitting performance is compared with that of the known Greenwood-Johnson and Persson-Brener models. (An illustrative fitting sketch follows this entry.)
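The entry above describes approximating the contact-radius relaxation with a stretched exponential and extracting the contact front velocity from it. The Python sketch below is a minimal illustration of that step only, assuming a common stretched-exponential form a(t) = a_inf + (a0 - a_inf)*exp(-(t/tau)^beta); the parameter names (a0, a_inf, tau, beta), the synthetic data, and the fitting routine are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch: fit a stretched-exponential law to contact-radius relaxation data,
# then obtain the contact front velocity da/dt analytically. The functional form and
# parameter names are assumptions, not the paper's exact four-parameter model.
import numpy as np
from scipy.optimize import curve_fit

def contact_radius(t, a0, a_inf, tau, beta):
    """Stretched-exponential decay of the contact radius during adhesion relaxation."""
    return a_inf + (a0 - a_inf) * np.exp(-(t / tau) ** beta)

def front_velocity(t, a0, a_inf, tau, beta):
    """Analytical da/dt of the stretched-exponential approximation (valid for t > 0)."""
    return (a_inf - a0) * np.exp(-(t / tau) ** beta) * beta * (t / tau) ** (beta - 1) / tau

# Synthetic relaxation data standing in for measured radius-vs-time records.
rng = np.random.default_rng(0)
t = np.linspace(0.01, 100.0, 200)                      # time (s)
a_true = contact_radius(t, 50e-6, 20e-6, 10.0, 0.6)    # contact radius (m)
a_meas = a_true + rng.normal(0.0, 0.2e-6, t.size)      # add measurement noise

popt, _ = curve_fit(contact_radius, t, a_meas, p0=(45e-6, 25e-6, 5.0, 0.5))
v_front = front_velocity(t, *popt)   # could then be related to the work of adhesion
print("fitted a0, a_inf, tau, beta:", popt)
```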

  • Article type: Journal Article
    The workflow for simulating motion from recorded data usually starts with selecting a generic musculoskeletal model and scaling it to represent subject-specific characteristics. Simulating muscle dynamics with muscle-tendon parameters computed from existing scaling methods in the literature, however, yields some inconsistencies compared to measurable outcomes. For instance, fiber lengths and muscle excitations simulated during walking with linearly scaled parameters do not resemble established patterns in the literature. This study presents a tool that leverages reported in vivo experimental observations to tune muscle-tendon parameters, and evaluates their influence on estimated muscle excitations and metabolic costs during walking. Starting from a scaled generic musculoskeletal model, we tuned optimal fiber length, tendon slack length, and tendon stiffness to match fiber lengths reported from ultrasound imaging, and muscle passive force-length relationships to match reported in vivo joint moment-angle relationships. With tuned parameters, muscles contracted more isometrically, and the soleus's operating range was estimated better than with linearly scaled parameters. Also, with tuned parameters, the on/off timing of nearly all muscles' excitations in the model agreed with reported electromyographic signals, and metabolic rate trajectories differed significantly from those obtained with linearly scaled parameters throughout the gait cycle. Our tool, freely available online, can customize muscle-tendon parameters easily and can be adapted to incorporate more experimental data. (An illustrative tuning sketch follows this entry.)
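The entry above tunes optimal fiber length, tendon slack length, and tendon stiffness against reported in vivo measurements. The sketch below is a minimal illustration of the underlying least-squares idea under a simplified rigid-tendon assumption; the geometry, parameter names (l_opt, l_ts), bounds, and synthetic data are placeholders and do not reproduce the published tool's actual formulation.

```python
# Hedged sketch of the parameter-tuning idea: adjust optimal fiber length (l_opt)
# and tendon slack length (l_ts) so that normalized fiber lengths predicted from
# muscle-tendon lengths match fiber lengths reported from ultrasound imaging.
# The rigid-tendon geometry and all numbers are illustrative assumptions; the
# published tool also tunes tendon stiffness and passive force-length curves.
import numpy as np
from scipy.optimize import least_squares

# Synthetic soleus-like data: muscle-tendon length over the gait cycle and the
# normalized fiber lengths to be reproduced (stand-ins, not real measurements).
l_mt = np.linspace(0.298, 0.307, 50)            # muscle-tendon length (m)
l_fib_reported = np.linspace(0.84, 1.04, 50)    # reported normalized fiber length (-)

def normalized_fiber_length(params, l_mt):
    """Rigid-tendon approximation: normalized fiber length = (l_mt - l_ts) / l_opt."""
    l_opt, l_ts = params
    return (l_mt - l_ts) / l_opt

def residuals(params):
    return normalized_fiber_length(params, l_mt) - l_fib_reported

x0 = np.array([0.05, 0.25])   # linearly scaled starting guess for l_opt, l_ts (m)
sol = least_squares(residuals, x0, bounds=([0.02, 0.20], [0.10, 0.30]))
print("tuned optimal fiber length and tendon slack length (m):", sol.x)
```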

  • Article type: Journal Article
    Can we rely on computational methods to accurately analyze complex texts? To answer this question, we compared different dictionary and scaling methods used in predicting the sentiment of German literature reviews against the "gold standard" of human-coded sentiments. Literature reviews constitute a challenging text corpus for computational analysis because they not only contain different text levels, for example, a summary of the work and the reviewer's appraisal, but are also characterized by subtle and ambiguous language elements. To take the nuanced sentiments of literature reviews into account, we worked with a metric rather than a dichotomous scale for sentiment analysis. The results of our analyses show that the sentiments predicted by prefabricated dictionaries, which are computationally efficient and require minimal adaptation, have a low to medium correlation with the human-coded sentiments (r between 0.32 and 0.39). The accuracy of self-created dictionaries using word embeddings (both pre-trained and self-trained) was considerably lower (r between 0.10 and 0.28). Given the high coding intensity, the contingency on seed selection, and the degree of data pre-processing of word embeddings that we found with our data, we would not recommend them for complex texts without further adaptation. While fully automated approaches appear not to predict text sentiments accurately for complex texts such as ours, we found relatively high correlations with a semi-automated approach (r of around 0.6), which, however, requires intensive human coding effort for the training dataset. In addition to illustrating the benefits and limits of computational approaches in analyzing complex text corpora and the potential of metric rather than binary scales of text sentiment, we also provide a practical guide for researchers to select an appropriate method and degree of pre-processing when working with complex texts. (An illustrative scoring sketch follows this entry.)
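The entry above compares dictionary-predicted sentiment scores with metric human-coded sentiments via correlation. The sketch below is a minimal illustration of that comparison using an invented toy dictionary and toy reviews; the actual study used German-language dictionaries, word embeddings, and a much larger, human-coded corpus.

```python
# Hedged sketch of the comparison set-up: score each review with a (toy) sentiment
# dictionary, then correlate the dictionary scores with metric human-coded sentiments.
# Dictionary entries, reviews, and human codes here are invented stand-ins.
import numpy as np

sentiment_dictionary = {"brilliant": 1.0, "moving": 0.5, "flat": -0.5, "tedious": -1.0}

def dictionary_score(text):
    """Mean polarity of dictionary words found in the text (0 if none are found)."""
    hits = [sentiment_dictionary[w] for w in text.lower().split() if w in sentiment_dictionary]
    return float(np.mean(hits)) if hits else 0.0

reviews = [
    "a brilliant and moving debut",
    "the plot is flat and the prose tedious",
    "moving in places but ultimately flat",
]
human_coded = np.array([0.8, -0.9, 0.1])        # metric human-coded sentiments
predicted = np.array([dictionary_score(r) for r in reviews])

r = np.corrcoef(predicted, human_coded)[0, 1]   # Pearson correlation, as in the study
print(f"dictionary vs. human-coded sentiment: r = {r:.2f}")
```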
