High-performance computing

  • Article type: Journal Article
    Background: Cardiac pacemaking remains an unsolved matter from many perspectives. Extensive experimental and computational studies have been performed to describe sinoatrial physiology across different scales, from the molecular to the clinical level. Nevertheless, the mechanism by which a heartbeat is generated inside the sinoatrial node and propagated to the working myocardium is not fully understood at present. This work aims to provide quantitative information about this fascinating phenomenon, especially regarding the contributions of cellular heterogeneity and fibroblasts to sinoatrial node automaticity and atrial driving. Methods: We developed a bidimensional computational model of human right atrial tissue, including the sinoatrial node. State-of-the-art knowledge of the anatomical and physiological aspects was adopted during the design of the baseline tissue model. The novelty of this study is the inclusion of cellular heterogeneity and fibroblasts inside the sinoatrial node, used to investigate how they tune the robustness of stimulus formation and conduction under different conditions (baseline, ionic current blocks, autonomic modulation, and external high-frequency pacing). Results: The simulations show that both heterogeneity and fibroblasts significantly increase the safety factor for conduction by more than 10% in almost all the conditions tested and shorten the sinus node recovery time after overdrive suppression by up to 60%. In the human model, especially under challenging conditions, the fibroblasts help the heterogeneous myocytes synchronise their rate (e.g. -82% in σCL under 25 nM acetylcholine administration) and capture the atrium (with 25% L-type calcium current block). However, the anatomical and gap-junctional coupling aspects remain the most important model parameters allowing effective atrial excitation. Conclusion: Despite the limitations of the proposed model, this work offers a quantitative explanation for the astonishing overall heterogeneity shown by the sinoatrial node.
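The pacemaker-drives-atrium mechanism that the model probes can be illustrated at toy scale with a generic excitable-medium simulation. The sketch below is not the authors' human atrial model: it is a 1D FitzHugh-Nagumo cable in which the first 20 cells receive a bias current that makes them self-oscillatory (a crude stand-in for nodal automaticity), while the remaining cells are quiescent tissue that the "node" must capture; all parameters are illustrative.

```python
import numpy as np

# 1D FitzHugh-Nagumo cable: cells 0-19 are self-oscillatory via a bias
# current (pacemaker-like); cells 20-99 are quiescent excitable tissue.
n, steps, dt, dx, D = 100, 6000, 0.05, 1.0, 1.0
a, b, eps = 0.7, 0.8, 0.08

v = -1.2 * np.ones(n)          # fast (voltage-like) variable, at rest
w = -0.625 * np.ones(n)        # slow recovery variable, at rest
I = np.zeros(n)
I[:20] = 0.5                   # bias current -> automaticity in the "node"

distal_peak = -np.inf          # largest v ever seen in the distal half
for _ in range(steps):
    lap = np.empty(n)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2       # no-flux boundaries
    lap[-1] = (v[-2] - v[-1]) / dx**2
    v = v + dt * (v - v**3 / 3 - w + I + D * lap)
    w = w + dt * eps * (v + a - b * w)
    distal_peak = max(distal_peak, v[50:].max())

# Resting level is about -1.2; a propagated wave drives v well above 0,
# so distal_peak records whether the "node" captured the distal tissue.
print(distal_peak)
```

If the bias current is removed (`I[:20] = 0`), the cable simply stays at rest, which mirrors the paper's point that automaticity and source-sink balance, not the tissue alone, determine atrial capture.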

  • Article type: Journal Article
    Presented and discussed here is the implementation of a software solution that provides prompt X-ray diffraction data analysis during fast dynamic compression experiments conducted with the dynamic diamond anvil cell technique. It includes efficient data collection, streaming of data and metadata to a high-performance cluster (HPC), fast azimuthal data integration on the cluster, and tools for controlling the data processing steps and visualizing the data using the DIOPTAS software package. This data processing pipeline is invaluable for a great number of studies. The potential of the pipeline is illustrated with two examples of data collected on ammonia-water mixtures and multiphase mineral assemblies under high pressure. The pipeline is designed to be generic in nature and could be readily adapted to provide rapid feedback for many other X-ray diffraction techniques, e.g. large-volume press studies, in situ stress/strain studies, phase transformation studies, and chemical reactions studied with high-resolution diffraction.
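The compute-intensive step that the pipeline offloads to the cluster, azimuthal integration, reduces each 2D detector frame to a 1D intensity-versus-radius pattern. Production code (e.g. the integration engines used by DIOPTAS) also handles detector geometry and intensity corrections; the NumPy sketch below shows only the core radial-binning reduction, with a synthetic frame and illustrative geometry.

```python
import numpy as np

def azimuthal_integrate(frame, center, n_bins=100):
    """Average intensity in annular bins around `center`.

    frame  : 2D array of pixel intensities
    center : (row, col) of the beam center in pixel units
    Returns (bin-center radii, mean intensity per bin).
    """
    rows, cols = np.indices(frame.shape)
    r = np.hypot(rows - center[0], cols - center[1])
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=frame.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    mean = sums / np.maximum(counts, 1)   # avoid division by zero
    radii = 0.5 * (edges[:-1] + edges[1:])
    return radii, mean

# Synthetic frame: a single Debye-Scherrer ring at radius 30 px
rows, cols = np.indices((128, 128))
r = np.hypot(rows - 64, cols - 64)
frame = np.exp(-((r - 30.0) ** 2) / 4.0)

radii, pattern = azimuthal_integrate(frame, (64, 64))
print(radii[np.argmax(pattern)])   # peak radius, close to 30 px
```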

  • Article type: Journal Article
    Every year, more than 19 million cancer cases are diagnosed, and this number continues to increase annually. Since standard treatment options have varying success rates for different types of cancer, understanding the biology of an individual's tumour becomes crucial, especially for cases that are difficult to treat. Personalised high-throughput profiling, using next-generation sequencing, allows for a comprehensive examination of biopsy specimens. Furthermore, the widespread use of this technology has generated a wealth of information on cancer-specific gene alterations. However, there exists a significant gap between identified alterations and their proven impact on protein function. Here, we present a bioinformatics pipeline that enables fast analysis of a missense mutation's effect on the stability and function of known oncogenic proteins. This pipeline is coupled with a predictor that summarises the outputs of the different tools used throughout the pipeline into a single probability score, achieving a balanced accuracy above 86%. The pipeline incorporates a virtual screening method to suggest potential FDA/EMA-approved drugs to be considered for treatment. We showcase three case studies to demonstrate the timely utility of this pipeline. To facilitate access and analysis of cancer-related mutations, we have packaged the pipeline as a web server, which is freely available at https://loschmidt.chemi.muni.cz/predictonco/. Scientific contribution: This work presents a novel bioinformatics pipeline that integrates multiple computational tools to predict the effects of missense mutations on proteins of oncological interest. The pipeline uniquely combines fast protein modelling, stability prediction, and evolutionary analysis with virtual drug screening, while offering actionable insights for precision oncology. This comprehensive approach surpasses existing tools by automating the interpretation of mutations and suggesting potential treatments, thereby striving to bridge the gap between sequencing data and clinical application.
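The summarising predictor is described only as aggregating the per-tool outputs into one probability. A minimal sketch of such a meta-predictor is a logistic combination of normalised tool scores; the tool names, weights, and bias below are hypothetical placeholders, not the trained PredictOnco model.

```python
import math

# Hypothetical per-tool outputs for one missense mutation, each rescaled
# to [0, 1] beforehand. Names and weights are illustrative only; the real
# predictor is trained on data, not hand-weighted.
tool_scores = {
    "stability_ddg": 0.72,    # destabilisation predictor
    "conservation": 0.90,     # evolutionary conservation score
    "pocket_distance": 0.35,  # proximity to a functional site
}
weights = {"stability_ddg": 1.4, "conservation": 2.1, "pocket_distance": 0.8}
bias = -2.0

# Logistic meta-predictor: weighted sum of tool scores -> single
# probability of a deleterious (oncogenic) effect.
z = bias + sum(weights[k] * s for k, s in tool_scores.items())
probability = 1.0 / (1.0 + math.exp(-z))
print(round(probability, 3))
```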

  • Article type: Journal Article
    This article introduces a suite of mini-applications (mini-apps) designed to optimise computational kernels in ab initio electronic structure codes. The suite is developed from flagship applications participating in the NOMAD Center of Excellence, such as the ELPA eigensolver library and the GW implementations of the exciting, Abinit, and FHI-aims codes. The mini-apps were identified by targeting functions that contribute significantly to the total execution time in the parent applications. This strategic selection allows for concentrated optimisation efforts. The suite is designed for easy deployment on various high-performance computing (HPC) systems, supported by an integrated CMake build system for straightforward compilation and execution. The aim is to harness the capabilities of emerging (post-)exascale systems, which necessitate concurrent hardware and software development, a concept known as co-design. The mini-app suite serves as a tool for profiling and benchmarking, providing insights that can guide both software optimisation and hardware design. Ultimately, these developments will enable more accurate and efficient simulations of novel materials, leveraging the full potential of exascale computing in materials science research.

  • Article type: Journal Article
    Physical mechanisms that contribute to the generation of fracture waves in condensed media under intensive dynamic impacts have not been fully studied. One of the hypotheses is that this process is associated with the blocky structure of a material. As the loading wave passes, the compliant interlayers between blocks are fractured, releasing the energy of self-balanced initial stresses in the blocks, which supports the motion of the fracture wave. We propose a new efficient numerical method for the analysis of the wave nature of the propagation of a system of cracks in thin interlayers of a blocky medium with complex rheological properties. The method is based on a variational formulation of the constitutive relations for the deformation of elastic-plastic materials, as well as the conditions for contact interaction of blocks through interlayers. We have developed a parallel computational algorithm that implements this method for supercomputers with cluster architecture. The results of the numerical simulation of fracture wave propagation in tempered glass under the action of distributed pulse disturbances are presented. This article is part of the theme issue 'Non-smooth variational problems with applications in mechanics'.

  • Article type: Journal Article
    The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields.
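SBGrid's actual Capsule scripts are not reproduced in the article, but the mechanism described, programmatically generated wrappers that isolate one application's runtime environment from all others without elevated privileges, can be sketched as follows. All paths and the stand-in application are hypothetical.

```shell
#!/bin/sh
set -e
# Illustrative capsule-style isolation (not SBGrid's actual scripts):
# generate a wrapper that runs one app with only its own tree visible.
APP_ROOT="$(mktemp -d)/myapp/1.2.3"      # hypothetical install tree
mkdir -p "$APP_ROOT/bin" "$APP_ROOT/lib"

# A stand-in application that just reports its environment.
printf '#!/bin/sh\necho "PATH=$PATH"\n' > "$APP_ROOT/bin/myapp"
chmod +x "$APP_ROOT/bin/myapp"

# The generated wrapper: wipe the environment with `env -i`, then
# expose only this application's own binaries and libraries.
cat > "$APP_ROOT/capsule" <<EOF
#!/bin/sh
exec env -i HOME="\$HOME" \\
    PATH="$APP_ROOT/bin:/usr/bin:/bin" \\
    LD_LIBRARY_PATH="$APP_ROOT/lib" \\
    "$APP_ROOT/bin/myapp" "\$@"
EOF
chmod +x "$APP_ROOT/capsule"

"$APP_ROOT/capsule"    # prints the isolated PATH
```

Because each wrapper hard-codes its own application tree, two conflicting versions of the same program can coexist on one system, which is the conflict-free property the abstract describes.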

  • Article type: Journal Article
    Ultracold atoms provide a platform for analog quantum computers capable of simulating the quantum turbulence that underlies puzzling phenomena such as pulsar glitches in rapidly spinning neutron stars. Unlike other platforms such as liquid helium, ultracold atoms have a viable theoretical framework for dynamics, but simulations push the edge of current classical computers. We present the largest simulations of fermionic quantum turbulence to date and explain the computing technology needed, especially improvements in the Eigenvalue soLvers for Petaflop Applications (ELPA) library that enable us to diagonalize matrices of record size (millions by millions). We quantify how dissipation and thermalization proceed in fermionic quantum turbulence by using the internal structure of vortices as a new probe of the local effective temperature. All simulation data and source codes are made available to facilitate rapid scientific progress in the field of ultracold Fermi gases.
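The record-size diagonalizations are performed with the distributed, MPI-based ELPA library, which cannot be demonstrated in a few lines; at toy scale, however, the underlying operation is a dense Hermitian eigendecomposition, as in this NumPy sketch.

```python
import numpy as np

# Toy stand-in for the eigenproblem ELPA solves at million-square scale:
# diagonalize a random Hermitian matrix and verify A = V diag(w) V^H.
rng = np.random.default_rng(0)
n = 200
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (a + a.conj().T) / 2               # Hermitian by construction

w, V = np.linalg.eigh(A)               # eigenvalues ascending, columns = vectors
reconstruction = (V * w) @ V.conj().T  # V diag(w) V^H
print(np.allclose(reconstruction, A))  # True
```

ELPA performs the same decomposition but distributes the matrix in 2D blocks across thousands of nodes, which is what makes "millions by millions" tractable.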

  • Article type: Journal Article
    DNA-based identification is vital for classifying biological specimens, yet methods to quantify the uncertainty of sequence-based taxonomic assignments are scarce. Challenges arise from noisy reference databases, including mislabelled entries and missing taxa. PROTAX addresses these issues with a probabilistic approach to taxonomic classification, advancing on methods that rely solely on sequence similarity. It provides calibrated probabilistic assignments to a partially populated taxonomic hierarchy, accounting for taxa that lack references and for incorrect taxonomic annotation. While effective on smaller scales, global application of PROTAX necessitates substantially larger reference libraries, a goal previously hindered by computational barriers. We introduce PROTAX-GPU, a scalable algorithm capable of leveraging the global Barcode of Life Data System (>14 million specimens) as a reference database. Using graphics processing units (GPUs) to accelerate similarity and nearest-neighbour operations, and the JAX library for Python integration, we achieve over a 1000× speedup compared with the central processing unit (CPU)-based implementation without compromising PROTAX's key benefits. PROTAX-GPU marks a significant stride towards real-time DNA barcoding, enabling quicker and more efficient species identification in environmental assessments. This capability opens up new avenues for real-time monitoring and analysis of biodiversity, advancing our ability to understand and respond to ecological dynamics. This article is part of the theme issue 'Towards a toolkit for global insect biodiversity monitoring'.
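The kernel PROTAX-GPU accelerates is, at its core, scoring a query barcode against a large reference set and selecting the most similar entries; PROTAX's probabilistic model then sits on top of those similarities. Below is a NumPy toy version of the similarity/nearest-neighbour step only (the real implementation runs in JAX on GPUs over millions of references).

```python
import numpy as np

BASES = "ACGT"

def encode(seq):
    """One-hot encode a DNA barcode as a (len x 4) matrix."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def nearest_references(query, references, k=2):
    """Indices and scores of the k references most similar to `query`
    (similarity = fraction of matching positions)."""
    q = encode(query)
    sims = np.array([(encode(r) * q).sum() / len(query) for r in references])
    order = np.argsort(-sims)[:k]
    return order, sims[order]

refs = ["ACGTACGT", "ACGTACGA", "TTTTCCCC"]
idx, sims = nearest_references("ACGTACGT", refs)
print(idx[0], sims[0])   # 0 1.0  (the exact match ranks first)
```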

  • Article type: Journal Article
    Two-photon lithography (TPL) is a laser-based additive manufacturing technique that enables the printing of arbitrarily complex cm-scale polymeric 3D structures with sub-micron features. Although various approaches have been investigated to enable the printing of fine features in TPL, it is still challenging to achieve rapid sub-100 nm 3D printing. A key limitation is that the physical phenomena that govern the theoretical and practical limits of the minimum feature size are not well known. Here, we investigate these limits in the projection TPL (P-TPL) process, a high-throughput variant of TPL wherein entire 2D layers are printed at once. We quantify the effects of the projected feature size, optical power, exposure time, and photoinitiator concentration on the printed feature size through finite element modeling of photopolymerization. Simulations are performed rapidly over a vast parameter set exceeding 10,000 combinations through a dynamic programming scheme implemented on high-performance computing resources. We demonstrate that there is no physics-based limit to the minimum feature sizes achievable with a precise and well-calibrated P-TPL system, despite the discrete nature of illumination. However, the practically achievable minimum feature size is limited by the increased sensitivity of the degree of polymer conversion to the processing parameters in the sub-100 nm regime. The insights generated here can serve as a roadmap towards fast, precise, and predictable sub-100 nm 3D printing.
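The dynamic-programming scheme is described only at a high level; one plausible reading is that expensive sub-solves shared among the >10,000 parameter combinations are computed once and reused across the sweep. The sketch below shows that memoization pattern with placeholder physics; `simulate_layer` and `feature_size` are illustrative stand-ins, not the authors' FEM model.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def simulate_layer(power, exposure):
    """Placeholder for an expensive per-layer sub-solve whose result
    depends only on optical power and exposure time."""
    return power * exposure            # stand-in for a converged dose field

def feature_size(power, exposure, concentration):
    """Toy response model: printed feature size scales with dose and
    photoinitiator concentration (illustrative only)."""
    dose = simulate_layer(power, exposure)
    return 100.0 * dose * concentration

powers = [0.5, 1.0, 1.5]
exposures = [1, 2, 5]
concentrations = [0.01, 0.02, 0.05]

results = {p: feature_size(*p)
           for p in product(powers, exposures, concentrations)}

# 27 combinations evaluated, but only 9 expensive sub-solves were run,
# because the (power, exposure) sub-result is shared across concentrations.
print(len(results), simulate_layer.cache_info().misses)
```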

  • Article type: Journal Article
    The computational performance requirements of space payloads are constantly increasing, and the redevelopment of space-grade processors requires a significant amount of time and is costly. This study investigates performance evaluation benchmarks for processors designed for various application scenarios. It also constructs benchmark modules and typical space application benchmarks specifically tailored for the space domain. Furthermore, the study systematically evaluates and analyzes the performance of the NVIDIA Jetson AGX Xavier and Loongson platforms to identify processors that are suitable for space missions. The experimental results demonstrate that the Jetson AGX Xavier performs exceptionally well and consumes less power during dense computations. The Loongson platform can achieve 80% of Xavier's performance in certain parallel-optimized computations and can surpass Xavier's performance at the expense of higher power consumption.
