open source

  • Article type: Journal Article
    The community structure and ecological function of marine ecosystems depend critically on phytoplankton. However, our understanding of phytoplankton is limited by the lack of detailed information on their morphology. To address this gap, we developed a framework that combines scanning electron microscopy (SEM) with photogrammetry to create realistic three-dimensional (3D) models of phytoplankton. The workflow of this framework is demonstrated using two marine algal species, the dinoflagellate Prorocentrum micans and the diatom Halamphora sp. The resulting 3D models are openly available and allow users to interact with phytoplankton and their complex structures both virtually (digitally) and tangibly (through 3D printing). They also enable surface-area and biovolume calculations of phytoplankton, as well as exploration of their light-scattering properties, both of which are important for ecosystem modeling. Additionally, presenting these models to the public bridges the gap between scientific inquiry and education, promoting broader awareness of the importance of phytoplankton.
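    A quick illustration of the surface-area and biovolume computation such models enable: given a watertight, consistently oriented triangle mesh exported from a 3D model, both quantities follow from standard formulas. This sketch (not the authors' code) uses NumPy; the per-triangle cross products give areas, and the signed-tetrahedron (divergence theorem) sum gives the enclosed volume.

```python
import numpy as np

def mesh_surface_area_and_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh.

    vertices: (n, 3) float array of points; faces: (m, 3) int array of
    vertex indices, consistently oriented outward. Area sums the
    per-triangle cross-product magnitudes; volume sums signed
    tetrahedra against the origin (exact for a closed mesh).
    """
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
    return area, volume
```

    Applied to an exported phytoplankton mesh (e.g. from STL), this yields surface area and biovolume in the mesh's own length units; a unit cube is a handy sanity check (area 6, volume 1).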

  • Article type: Journal Article
    BACKGROUND: At the intersection of neural monitoring and decoding, event-related potentials (ERPs) based on electroencephalography (EEG) have opened a window into intrinsic brain function. The stability of ERPs makes them frequently employed in neuroscience. However, project-specific custom code, the tracking of user-defined parameters, and the large diversity of commercial tools have limited clinical application.
    METHODS: We introduce an open-source, user-friendly, and reproducible MATLAB toolbox named EPAT that includes a variety of algorithms for EEG data preprocessing. It provides EEGLAB-based template pipelines for advanced multi-processing of EEG, magnetoencephalography, and polysomnogram data. Participants evaluated EEGLAB and EPAT across 14 indicators, with satisfaction ratings analyzed using the Wilcoxon signed-rank test or a paired t-test, depending on distribution normality.
    RESULTS: EPAT eases EEG signal browsing and preprocessing, EEG power-spectrum analysis, independent component analysis, time-frequency analysis, ERP waveform drawing, and topographic analysis of scalp voltage. A user-friendly graphical user interface allows clinicians and researchers with no programming background to use EPAT.
    CONCLUSIONS: This article describes the architecture, functionalities, and workflow of the toolbox. The release of EPAT will help advance EEG methodology and its application to clinical translational studies.
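    To give a flavor of the kind of preprocessing such pipelines automate, here is a minimal Python sketch of one step: band-pass filtering a single-channel trace, then estimating its power spectrum with Welch's method. EPAT itself is a MATLAB toolbox built on EEGLAB; the function below is an illustrative stand-in, not its API, and the band and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def preprocess_and_psd(eeg, fs, band=(1.0, 40.0)):
    """Band-pass filter a single-channel EEG trace (zero-phase, 4th-order
    Butterworth) and estimate its power spectral density with Welch's
    method. Returns (filtered signal, frequencies, PSD)."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype='band')
    filtered = filtfilt(b, a, eeg)  # zero-phase to avoid ERP latency shifts
    freqs, psd = welch(filtered, fs=fs, nperseg=min(len(filtered), int(fs) * 2))
    return filtered, freqs, psd
```

    For example, a noisy 10 Hz test oscillation at 250 Hz sampling yields a spectral peak at 10 Hz, mimicking an alpha-band check.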

  • Article type: Journal Article
    An essential step in the application of near-infrared spectroscopy technology is spectrum preprocessing. Reasonable implementation ensures that the effective spectral information is correctly extracted and that the model's accuracy is improved. However, some analysts, particularly less skilled ones, still rely on manual trial and error. Previous papers have provided preprocessing optimization algorithms for NIR, but several problems remain to be resolved, such as unwieldy determination of the preprocessing sequence, fluctuating optimization outcomes, and a lack of sufficient statistical information. This research proposes a spectrum auto-analysis methodology named the self-expansion full information optimization strategy, a new, powerful open-source technique that addresses all of the above issues simultaneously. For the first time in the field of chemometrics, this algorithm offers a reliable and effective automatic near-infrared modelling method based on statistical informatics. With the aid of its built-in modules, such as information generators and spectrum processors, it fully searches the common preprocessing techniques, with the best combination determined by Monte Carlo cross-validation. The final ensemble calibration model is then built using the optimized preprocessing schemes, together with a wavelength-variable screening algorithm. The optimization strategy offers the user objective, useful statistical information created throughout the modeling process to further examine the model's effectiveness. Tests on two groups of real near-infrared spectral data demonstrate that the suggested method can easily and successfully extract spectral information and develop calibration models. Additionally, this optimization strategy can be applied to other spectrum-analysis areas, such as Raman or infrared spectroscopy, by changing a few of its parameters, and thus has considerable application value.

  • Article type: Journal Article
    Introduction: In this paper, we introduce an adult-sized FE full-body HBM for seating comfort assessment and present its validation in different static seating conditions in terms of pressure distribution and contact forces. Methods: We morphed the PIPER Child model into a male adult-sized model with the help of different target sources, including body surface scans, spinal and pelvic bone surfaces, and an open-source full-body skeleton. We also introduced soft-tissue sliding under the ischial tuberosities (ITs). The initial model was adapted for seating applications with low-modulus soft-tissue material properties and mesh refinement in the buttock regions. We compared the contact forces and pressure-related parameters simulated using the adult HBM with those obtained experimentally from the person whose data were used for model development. Four seat configurations, with the seat-pan angle varying from 0° to 15° and the seat-to-back angle fixed at 100°, were tested. Results: The adult HBM correctly simulated the contact forces on the backrest, seat pan, and foot support, with average errors of less than 22.3 N and 15.5 N in the horizontal and vertical directions, which is small considering the body weight (785 N). In terms of contact area and peak and mean pressure, the simulation matched the experiment well for the seat pan. With soft-tissue sliding, higher soft-tissue compression was obtained, in agreement with observations from recent MRI studies. Discussion: The present adult model can serve as a reference for the morphing tools proposed in PIPER. The model will be published openly online as part of the PIPER open-source project (www.PIPER-project.org) to facilitate its reuse and improvement, as well as its specific adaptation to different applications.

  • Article type: Journal Article
    Phylogenetic tools are fundamental to the study of evolutionary relationships. In this paper, we present Ksak, a novel high-throughput tool for alignment-free phylogenetic analysis. Ksak computes the pairwise distance matrix between molecular sequences using seven widely accepted k-mer-based distance measures. Based on the distance matrix, Ksak constructs the phylogenetic tree with standard algorithms. When benchmarked on a gold-standard 16S rRNA dataset, Ksak was the most accurate of the five tools compared and was 19% more accurate than ClustalW2, a high-accuracy multiple-sequence aligner. Above all, Ksak was tens to hundreds of times faster than ClustalW2, which helps eliminate the computational limits currently encountered in large-scale multiple sequence alignment. Ksak is freely available at https://github.com/labxscut/ksak.
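    The first stage of this kind of alignment-free analysis, turning sequences into k-mer profiles and computing a pairwise distance matrix, can be illustrated in a few lines. This sketch uses a plain Euclidean distance between normalized k-mer frequencies; it conveys the general approach, not Ksak's implementation of its seven measures.

```python
from itertools import product
import numpy as np

def kmer_profile(seq, k=3, alphabet='ACGT'):
    """Normalized k-mer frequency vector of a DNA sequence; k-mers
    containing characters outside the alphabet are skipped."""
    index = {''.join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    counts = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:
            counts[index[kmer]] += 1
    return counts / max(counts.sum(), 1)

def euclidean_distance_matrix(seqs, k=3):
    """Pairwise Euclidean distances between k-mer profiles — one of the
    simpler distance measures in this family."""
    profiles = np.array([kmer_profile(s, k) for s in seqs])
    diff = profiles[:, None, :] - profiles[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```

    A tree builder (e.g. neighbor joining) then turns the resulting matrix into a phylogeny, which is the second stage the abstract describes.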

  • Article type: Journal Article
    We present TIGRE-VarianCBCT, an open-source Matlab-GPU toolkit for Varian on-board cone-beam CT, with particular emphasis on addressing challenges in raw-data preprocessing, artifact correction, tomographic reconstruction, and image post-processing. The aim of this project is to provide not only a tool to bridge the gap between clinical use of CBCT scan data and research algorithms, but also a framework that breaks the imaging chain down into individual processes so that research effort can focus on a specific part. The entire imaging chain, the module-based architecture, the data flow, and the techniques used in creating the toolkit are presented. Raw scan data are first decoded to extract the X-ray fluoro image series and set up the imaging geometry. Data-conditioning operations, including scatter correction, normalization, beam-hardening correction, and ring removal, are performed sequentially. Reconstruction is supported by TIGRE with FDK as well as a variety of iterative algorithms. Pixel-to-HU mapping is calibrated with a Catphan™ 504 phantom. Imaging dose in CTDIw is calculated with an empirical formula. The performance was validated on real patient scans, with good agreement with the vendor-designed program. Case studies in scan-protocol optimization, low-dose imaging, and iterative-algorithm comparison demonstrated its substantial potential for scan-data-based clinical studies. The toolkit is released under the BSD license, imposing minimal restrictions on its use and distribution. It is accessible as a module at https://github.com/CERN/TIGRE.
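    The pixel-to-HU calibration step can be illustrated with a minimal sketch: inserts of known HU in a Catphan-style phantom give (mean pixel value, HU) pairs, and a least-squares line maps reconstructed pixel values to HU. This is an illustration of the general technique, not the toolkit's code, and the numbers in the usage check are made up.

```python
import numpy as np

def fit_pixel_to_hu(pixel_means, known_hu):
    """Fit a least-squares linear map HU = a * pixel + b from phantom
    inserts of known HU, and return it as a callable."""
    a, b = np.polyfit(pixel_means, known_hu, 1)
    return lambda px: a * px + b
```

    In practice one measures mean pixel values in each insert ROI of the reconstructed phantom volume and applies the fitted map to the whole image.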

  • Article type: Journal Article
    Computed tomography (CT) has been widely applied in medical diagnosis, nondestructive evaluation, homeland security, and other science and engineering applications. Image reconstruction is one of the core CT imaging technologies. In this review paper, we systematically review the currently publicly available open-source toolkits for CT image reconstruction in terms of their environments, object models, imaging geometries, and algorithms. In addition to analytic and iterative algorithms, deep-learning reconstruction networks and open code are also reviewed as a third category of reconstruction algorithms. This systematic summary of the publicly available software platforms will help facilitate CT research and development.

  • Article type: Journal Article
    Light-sheet fluorescence microscopy (LSFM) allows volumetric live imaging at high speed and with low phototoxicity. Various LSFM modalities are commercially available, but their size and cost limit access by the research community. A new method, termed sub-voxel-resolving (SVR) light-sheet add-on microscopy (SLAM), is presented to enable fast, resolution-enhanced light-sheet fluorescence imaging from a conventional wide-field microscope. The method has two components: a miniature add-on device for regular wide-field microscopes, containing a horizontal laser light-sheet illumination path that confines fluorophore excitation to the vicinity of the focal plane for optical sectioning; and an off-axis scanning strategy with an SVR algorithm that uses sub-voxel spatial shifts to reconstruct the image volume, yielding a twofold increase in resolution. The SLAM method has been applied to observe changes in muscle activity of crawling C. elegans, the heartbeat of developing zebrafish embryos, and the neural anatomy of cleared mouse brains, all at high spatiotemporal resolution. It provides an efficient and cost-effective solution for converting the vast number of in-service microscopes to fast 3D live imaging with voxel-super-resolved capability.
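    The sub-voxel idea — combining frames acquired at known sub-pixel offsets onto a finer grid — can be shown with a toy two-times shift-and-add reconstruction. The actual SVR algorithm is considerably more elaborate; this sketch only conveys why sub-voxel shifts carry extra resolution.

```python
import numpy as np

def shift_and_add_2x(images, shifts):
    """Interleave low-resolution frames with known half-pixel shifts
    onto a 2x-finer grid. `shifts` are (dy, dx) in low-res pixel units;
    a half-pixel shift corresponds to one high-res pixel offset."""
    h, w = images[0].shape
    hi = np.zeros((2 * h, 2 * w))
    for img, (dy, dx) in zip(images, shifts):
        oy, ox = int(round(2 * dy)), int(round(2 * dx))
        hi[oy::2, ox::2] = img  # each frame fills one sub-grid phase
    return hi
```

    With four frames covering the four half-pixel phases, the fine grid is fully populated and, in this noiseless toy case, the high-resolution image is recovered exactly.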

  • Article type: Journal Article
    The integration of greater functionality into vehicles increases the complexity of car control. Many research efforts are dedicated to designing car-control systems that let users direct the car simply by showing it what to do; however, for non-expert users, controlling the car with a remote or switches is complicated. With this in mind, this paper presents an Arduino-based car-control system that no longer requires manual control of the car. Two main contributions are presented in this work. First, we show that the car can be controlled with hand gestures, according to the movement and position of the hand. The hand-gesture system works with an Arduino Nano, an accelerometer, and a radio-frequency (RF) transmitter. The accelerometer (attached to the hand glove) senses the acceleration forces produced by hand movement and transfers the data to the Arduino Nano placed on the glove. After receiving the data, the Arduino Nano converts it into angle values in the range 0°–450° and sends it through the RF transmitter to the RF receiver of the Arduino Uno placed on the car. Second, the proposed car system can be controlled by an Android-based mobile application with different modes (e.g., touch-button mode, voice-recognition mode). The mobile-application system extends the hand-gesture system with the addition of a Bluetooth module. In this case, whenever the user presses any of the touch buttons in the application and/or gives voice commands, the corresponding signal is sent to the Arduino Uno. After receiving the signal, the Arduino checks it against its predefined instructions for moving forward, backward, left, or right, and for braking; it then commands the motor module to move the car in the corresponding direction.
    In addition, an automatic obstacle-detection system is introduced to improve safety and avoid hazards, with the help of sensors placed at the front of the car. The proposed systems are designed as a lab-scale prototype to experimentally validate their efficiency, accuracy, and affordability. The experimental results show that the proposed work combines all capabilities in one (hand gestures, touch buttons and voice recognition via the mobile application, and obstacle detection), is very easy to use, and can be assembled in a simple hardware circuit. We note that the proposed systems could be implemented under real conditions at large scale in the future, which would be useful in automobile and robotics applications.
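    The gesture-to-command mapping can be sketched in a few lines: tilt angles are derived from the accelerometer's gravity components and thresholded into drive commands. The paper's firmware runs in C on the Arduino; this Python sketch, with made-up axis conventions and a made-up 20° threshold, only illustrates the logic.

```python
import math

def tilt_to_command(ax, ay, az, threshold_deg=20.0):
    """Map glove accelerometer readings (in g units) to a drive command.

    Pitch and roll are computed from the gravity vector; whichever tilt
    exceeds the threshold decides the command, defaulting to 'brake'
    when the hand is held level."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay ** 2 + az ** 2)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax ** 2 + az ** 2)))
    if pitch > threshold_deg:
        return 'forward'
    if pitch < -threshold_deg:
        return 'backward'
    if roll > threshold_deg:
        return 'right'
    if roll < -threshold_deg:
        return 'left'
    return 'brake'
```

    On the real system, the resulting command would be serialized over the RF link to the Arduino Uno, which drives the motor module accordingly.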

  • Article type: Journal Article
    BACKGROUND: Testing the dependence of two variables is one of the fundamental tasks in statistics. In this work, we developed an open-source R package (knnAUC) for detecting nonlinear dependence between one continuous variable X and one binary dependent variable Y (0 or 1).
    RESULTS: We addressed this problem using knnAUC (k-nearest neighbors AUC test; the R package is available at https://sourceforge.net/projects/knnauc/). In the knnAUC software framework, we first resample the dataset into training and testing sets according to the sample ratio (from 0 to 1), and then construct a k-nearest neighbors classifier to obtain the yhat estimator (the probability that y = 1) for testy (the true labels of the testing dataset). Finally, we calculate the AUC (area under the receiver operating characteristic curve) estimator and test whether it is greater than 0.5. To evaluate the advantages of knnAUC over seven other popular methods, we performed extensive simulations to explore the relationships among the eight methods, and compared false-positive rates and statistical power using both simulated and real datasets (chronic hepatitis B and kidney cancer RNA-seq datasets).
    CONCLUSIONS: We conclude that knnAUC is an efficient R package for testing nonlinear dependence between one continuous variable and one binary dependent variable, especially in computational biology.
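    The procedure described above can be sketched as follows: split the data, fit a kNN classifier, score AUC on the held-out set, and check whether the AUC exceeds 0.5. This Python stand-in (the package itself is in R) simplifies the resampling and significance machinery; the repeated-split "p-value proxy" is an illustrative device, not the package's test.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def knn_auc_test(x, y, k=5, test_size=0.3, n_resamples=100, seed=0):
    """Sketch of the knnAUC idea for one continuous x and binary y.

    Repeatedly split, fit kNN, and score held-out AUC; return the mean
    AUC and the fraction of resamples with AUC <= 0.5 (a crude proxy
    for evidence against dependence)."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_resamples):
        xtr, xte, ytr, yte = train_test_split(
            x.reshape(-1, 1), y, test_size=test_size,
            random_state=int(rng.integers(1 << 31)), stratify=y)
        clf = KNeighborsClassifier(n_neighbors=k).fit(xtr, ytr)
        aucs.append(roc_auc_score(yte, clf.predict_proba(xte)[:, 1]))
    aucs = np.array(aucs)
    return float(aucs.mean()), float((aucs <= 0.5).mean())
```

    A symmetric nonlinear relation such as y = 1 when |x| > 1 is exactly the kind of dependence a linear correlation test misses but a kNN classifier detects, which is the package's motivating use case.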