Information storage and retrieval

  • Article Type: Journal Article
    To explore complex biological questions, it is often necessary to access various data types from public data repositories. As the volume and complexity of biological sequence data grow, public repositories face significant challenges in ensuring that the data is easily discoverable and usable by the biological research community. To address these challenges, the National Center for Biotechnology Information (NCBI) has created NCBI Datasets. This resource provides straightforward, comprehensive, and scalable access to biological sequences, annotations, and metadata for a wide range of taxa. Following the FAIR (Findable, Accessible, Interoperable, and Reusable) data management principles, NCBI Datasets offers user-friendly web interfaces, command-line tools, and documented APIs, empowering researchers to access NCBI data seamlessly. The data is delivered as packages of sequences and metadata, thus facilitating improved data retrieval, sharing, and usability in research. Moreover, this data delivery method fosters effective data attribution and promotes its further reuse. This paper outlines the current scope of data accessible through NCBI Datasets and explains various options for exploring and downloading the data.
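The programmatic access described above can be sketched in a few lines. This is a minimal illustration of composing a request URL for the NCBI Datasets REST API; the `v2alpha` base path, endpoint shape, and `include_annotation_type` parameter reflect the public API as commonly documented, but treat the exact names as assumptions rather than authoritative reference.

```python
# Sketch: building an NCBI Datasets API URL for a genome data package.
# The endpoint shape below is an assumption based on the public v2alpha
# REST API; consult the official documentation before relying on it.

BASE = "https://api.ncbi.nlm.nih.gov/datasets/v2alpha"

def genome_package_url(accession: str,
                       include=("GENOME_FASTA", "GFF3")) -> str:
    """Build a download URL for a genome data package by accession."""
    annotation = ",".join(include)
    return (f"{BASE}/genome/accession/{accession}/download"
            f"?include_annotation_type={annotation}")

# Example: the GRCh38 human reference assembly accession.
url = genome_package_url("GCF_000001405.40")
print(url)
```

The same package is available through the `datasets` command-line tool, which wraps these endpoints and delivers sequences plus metadata as a single archive.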

  • Article Type: Journal Article
    No abstract available.

  • Article Type: Journal Article
    BACKGROUND: Pregnancy acts as a cardiovascular stress test. Although many complications resolve following birth, women with hypertensive disorders of pregnancy have an increased long-term risk of developing cardiovascular disease (CVD). Monitoring postnatal health can reduce this risk but requires better methods to identify high-risk women for timely interventions.
    METHODS: Employing a qualitative descriptive study design, focus groups and/or interviews were conducted, separately engaging public contributors and clinical professionals. Diverse participants were recruited through social media convenience sampling. Semi-structured, facilitator-led discussions explored perspectives of current postnatal assessment and attitudes towards linking patient electronic healthcare data to develop digital tools for identifying postpartum women at risk of CVD. Participant perspectives were gathered using post-it notes or a facilitator scribe and analysed thematically.
    RESULTS: From 27 public and seven clinical contributors, five themes regarding postnatal check expectations versus reality were developed, including 'limited resources', 'low maternal health priority', 'lack of knowledge', 'ineffective systems' and 'new mum syndrome'. Despite some concerns, all supported data linkage to identify women postnatally, targeting intervention to those at greater risk of CVD. Participants outlined potential benefits of digitalisation and risk prediction, highlighting design and communication needs for diverse communities.
    CONCLUSIONS: Current health system constraints in England contribute to suboptimal postnatal care. Integrating data linkage and improving education on data and digital tools for maternal healthcare shows promise for enhanced monitoring and improved future health. Recognised for streamlining processes and risk prediction, digital tools may enable more person-centred care plans, addressing the gaps in current postnatal care practice.

  • Article Type: Journal Article
    A trademark's image is usually the first type of indirect contact between a consumer and a product or a service. Companies rely on graphical trademarks as a symbol of quality and instant recognition, seeking to protect them from copyright infringements. A popular defense mechanism is graphical searching, where an image is compared to a large database to find potential conflicts with similar trademarks. Despite not being a new subject, the image-retrieval state of the art lacks reliable solutions in the Industrial Property (IP) sector, where datasets are practically unrestricted in content, with abstract images for which modeling human perception is a challenging task. Existing Content-Based Image Retrieval (CBIR) systems still present several problems, particularly in terms of efficiency and reliability. In this paper, we propose a new CBIR system that overcomes these major limitations. It follows a modular methodology, composed of a set of individual components tasked with the retrieval, maintenance, and gradual optimization of trademark image searching, working on large-scale, unlabeled datasets. Its generalization capacity is achieved using multiple feature descriptions, weighted separately and combined to produce a single similarity score. Images are evaluated for general features, edge maps, and regions of interest, using a method based on Watershedding K-Means segments. We propose an image recovery process that relies on a new similarity measure between all feature descriptions. New trademark images are added every day to ensure up-to-date results. The proposed system showcases timely retrieval speed, with 95% of searches having a 10-second presentation speed and a mean average precision of 93.7%, supporting its applicability to real-world IP protection scenarios.
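The weighted combination of multiple feature descriptions into a single similarity score might look like the following minimal sketch. The descriptor names ("global", "edge", "roi") and the weights are illustrative stand-ins, not the paper's actual features or learned weighting.

```python
# Toy sketch: fuse per-descriptor cosine similarities into one score
# via a normalized weighted sum. Descriptor names and weights are
# hypothetical, chosen only to illustrate the combination step.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(query_feats, cand_feats, weights):
    """Weighted sum of per-descriptor cosine similarities -> one score."""
    total = sum(weights.values())
    return sum(weights[k] * cosine(query_feats[k], cand_feats[k])
               for k in weights) / total

q = {"global": [1.0, 0.0], "edge": [0.5, 0.5], "roi": [0.0, 1.0]}
c = {"global": [1.0, 0.0], "edge": [0.5, 0.5], "roi": [0.0, 1.0]}
w = {"global": 0.5, "edge": 0.3, "roi": 0.2}
print(combined_similarity(q, c, w))  # identical features -> 1.0
```

Ranking a database then reduces to sorting candidates by this fused score, which is why the relative weighting of descriptors matters so much for retrieval quality.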

  • Article Type: Journal Article
    To explore the application effect of the deep learning (DL) network model in Internet of Things (IoT) database query and optimization. This study first analyzes the architecture of IoT database queries, then explores the DL network model, and finally optimizes the DL network model through optimization strategies. The advantages of the optimized model are verified through experiments. Experimental results show that the optimized model has higher efficiency than other models in the model training and parameter optimization stages. Especially when the data volume is 2000, the model training time and parameter optimization time of the optimized model are remarkably lower than those of the traditional model. In terms of resource consumption, the Central Processing Unit and Graphics Processing Unit usage and memory usage of all models increase as the data volume rises. However, the optimized model exhibits better performance on energy consumption. In throughput analysis, the optimized model can maintain high transaction numbers and data volumes per second when handling large data requests, especially at 4000 data volumes, and its peak time processing capacity exceeds that of other models. Regarding latency, although the latency of all models increases with data volume, the optimized model performs better in database query response time and data processing latency. The results of this study not only reveal the optimized model's superior performance in processing IoT database queries and their optimization but also provide a valuable reference for IoT data processing and DL model optimization. These findings help to promote the application of DL technology in the IoT field, especially in scenarios that involve large-scale data and require efficient processing, and offer a vital reference for research and practice in related fields.

  • Article Type: Journal Article
    MOTIVATION: Recent proprietary large language models (LLMs), such as GPT-4, have achieved a milestone in tackling diverse challenges in the biomedical domain, ranging from multiple-choice questions to long-form generations. To address challenges that still cannot be handled with the encoded knowledge of LLMs, various retrieval-augmented generation (RAG) methods have been developed by searching documents from the knowledge corpus and appending them unconditionally or selectively to the input of LLMs for generation. However, when applying existing methods to different domain-specific problems, poor generalization becomes apparent, leading to fetching incorrect documents or making inaccurate judgments. In this paper, we introduce Self-BioRAG, a framework reliable for biomedical text that specializes in generating explanations, retrieving domain-specific documents, and self-reflecting on generated responses. We utilize 84k filtered biomedical instruction sets to train Self-BioRAG, which can assess its generated explanations with customized reflective tokens. Our work shows that domain-specific components, such as a retriever, a domain-related document corpus, and instruction sets, are necessary for adhering to domain-related instructions. Using three major medical question-answering benchmark datasets, experimental results of Self-BioRAG demonstrate significant performance gains, achieving a 7.2% absolute improvement on average over the state-of-the-art open-foundation model with a parameter size of 7B or less. Similarly, Self-BioRAG outperforms RAG by an 8% Rouge-1 score on average in generating more proficient answers on two long-form question-answering benchmarks. Overall, we find that Self-BioRAG locates the clues in the question, retrieves relevant documents if needed, and understands how to answer with information from retrieved documents and encoded knowledge as a medical expert does. We release our data and code for training our framework components and model weights (7B and 13B) to enhance capabilities in biomedical and clinical domains.
    AVAILABILITY: Self-BioRAG is available at https://github.com/dmis-lab/self-biorag.
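The selective-retrieval behaviour described above can be caricatured in a few lines. In Self-BioRAG the retrieve-or-not decision is made by trained reflective tokens; the keyword gate and one-document corpus below are purely illustrative stand-ins.

```python
# Toy illustration of selective retrieval: a gate decides whether to
# fetch documents before answering. The keyword heuristic stands in
# for Self-BioRAG's learned reflective tokens and is NOT the method.

CORPUS = {
    "metformin": "Metformin is a first-line therapy for type 2 diabetes.",
}

def needs_retrieval(question: str) -> bool:
    # stand-in for the model's retrieve/no-retrieve reflective decision
    return any(term in question.lower() for term in CORPUS)

def answer(question: str) -> str:
    if needs_retrieval(question):
        docs = [t for k, t in CORPUS.items() if k in question.lower()]
        return f"(grounded in {len(docs)} doc) " + docs[0]
    return "(answered from encoded knowledge)"

print(answer("What is metformin used for?"))
```

The point of the gate is cost and accuracy: questions answerable from encoded knowledge skip retrieval entirely, while domain-specific questions trigger a search over the biomedical corpus.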

  • Article Type: Journal Article
    Polymerase Chain Reaction (PCR) amplification is widely used for retrieving information from DNA storage. During the PCR amplification process, nonspecific pairing between the 3' end of the primer and the DNA sequence can cause cross-talk in the amplification reaction, leading to the generation of interfering sequences and reduced amplification accuracy. To address this issue, we propose an efficient coding algorithm for PCR amplification information retrieval (ECA-PCRAIR). This algorithm employs variable-length scanning and pruning optimization to construct a codebook that maximizes storage density while satisfying traditional biological constraints. Subsequently, a codeword search tree is constructed based on the primer library to optimize the codebook, and a variable-length interleaver is used for constraint detection and correction, thereby minimizing the likelihood of nonspecific pairing. Experimental results demonstrate that ECA-PCRAIR can reduce the probability of nonspecific pairing between the 3' end of the primer and the DNA sequence to 2-25%, enhancing the robustness of the DNA sequences. Additionally, ECA-PCRAIR achieves a storage density of 2.14-3.67 bits per nucleotide (bits/nt), significantly improving storage capacity.
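Codebook construction under biological constraints can be sketched as a filter over candidate codewords. The fixed codeword length and the specific thresholds below are illustrative only; ECA-PCRAIR's actual variable-length scanning, pruning, and primer-library search tree are considerably more involved.

```python
# Toy codebook filter: enumerate fixed-length candidate codewords and
# keep those with bounded homopolymer runs and balanced GC content.
# Length and thresholds are illustrative, not the paper's parameters.
from itertools import product

def max_run(seq: str) -> int:
    """Length of the longest homopolymer run in seq."""
    run = best = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def gc_fraction(seq: str) -> float:
    return sum(c in "GC" for c in seq) / len(seq)

def build_codebook(length=4, max_homopolymer=2, gc=(0.25, 0.75)):
    words = ("".join(p) for p in product("ACGT", repeat=length))
    return [w for w in words
            if max_run(w) <= max_homopolymer
            and gc[0] <= gc_fraction(w) <= gc[1]]

codebook = build_codebook()
print(len(codebook), "valid codewords out of 256")
```

Tightening the constraints shrinks the codebook and hence the achievable bits-per-nucleotide, which is exactly the density-versus-robustness trade-off the paper's optimization targets.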

  • Article Type: Journal Article
    OBJECTIVE: Real-world data (RWD) collected on patients treated as part of routine clinical care form the basis of cancer clinical registries. Capturing accurate death data can be challenging, with inaccurate survival data potentially compromising the integrity of registry-based research. Here, we explore the utility of data linkage (DL) to state-based registries to enhance the capture of survival outcomes.
    METHODS: We identified consecutive adult patients with brain tumors treated in the state of Victoria from the Brain Tumour Registry Australia: Innovation and Translation (BRAIN) database, who had no recorded date of death and no follow-up within the last 6 months. Full name and date of birth were used to match patients in the BRAIN registry with those in the Victorian Births, Deaths and Marriages (BDM) registry. Overall survival (OS) outcomes were compared pre- and post-DL.
    RESULTS: Of the 7,346 clinical registry patients, 5,462 (74%) had no date of death and no follow-up recorded within the last 6 months. Of the 5,462 patients, 1,588 (29%) were matched with a date of death in BDM. Factors associated with an increased number of matches were poor prognosis tumors, older age, and social disadvantage. OS was significantly overestimated pre-DL compared with post-DL for the entire cohort (pre- v post-DL: hazard ratio, 1.43; P < .001; median, 29.9 months v 16.7 months) and for most individual tumor types. This finding was present independent of the tumor prognosis.
    CONCLUSIONS: As revealed by linkage with BDM, a high proportion of patients in a brain cancer clinical registry had missing death data, contributed to by informative censoring, inflating OS calculations. DL to pertinent registries on an ongoing basis should be considered to ensure accurate reporting of survival data and interpretation of RWD outcomes.
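Deterministic linkage on full name and date of birth, the match key described in the methods, reduces to a keyed lookup. This toy omits the name normalisation, probabilistic matching, and governance steps a real BRAIN-to-BDM linkage would require; the records are invented.

```python
# Minimal sketch of deterministic record linkage on (name, dob).
# Real registry linkage involves normalisation and approval processes
# not shown here; all records below are fabricated examples.

def link_deaths(registry, deaths):
    """Attach a date of death where (name, dob) matches exactly."""
    index = {(d["name"].lower(), d["dob"]): d["date_of_death"]
             for d in deaths}
    for patient in registry:
        key = (patient["name"].lower(), patient["dob"])
        patient["date_of_death"] = index.get(key)  # None if unmatched
    return registry

registry = [{"name": "Ada Lovelace", "dob": "1815-12-10"},
            {"name": "Alan Turing", "dob": "1912-06-23"}]
deaths = [{"name": "ada lovelace", "dob": "1815-12-10",
           "date_of_death": "1852-11-27"}]
linked = link_deaths(registry, deaths)
print([p["date_of_death"] for p in linked])  # ['1852-11-27', None]
```

Patients left with `None` after linkage are the informatively censored group the paper describes: without linkage they would be treated as alive, inflating overall survival.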

  • Article Type: Journal Article
    BACKGROUND: Large language models (LLMs) that can efficiently screen and identify studies meeting specific criteria would streamline literature reviews. Additionally, those capable of extracting data from publications would enhance knowledge discovery by reducing the burden on human reviewers.
    METHODS: We created an automated pipeline utilizing the OpenAI GPT-4 32K API (version "2023-05-15") to evaluate the accuracy of GPT-4's responses to queries about published papers on HIV drug resistance (HIVDR), with and without an instruction sheet. The instruction sheet contained specialized knowledge designed to assist a person trying to answer questions about an HIVDR paper. We designed 60 questions pertaining to HIVDR and created markdown versions of 60 published HIVDR papers in PubMed. We presented the 60 papers to GPT-4 in four configurations: (1) all 60 questions simultaneously; (2) all 60 questions simultaneously with the instruction sheet; (3) each of the 60 questions individually; and (4) each of the 60 questions individually with the instruction sheet.
    RESULTS: GPT-4 achieved a mean accuracy of 86.9%, which was 24.0% higher than when the answers to the papers were permuted. The overall recall and precision were 72.5% and 87.4%, respectively. The standard deviation of three replicates for the 60 questions ranged from 0 to 5.3%, with a median of 1.2%. The instruction sheet did not significantly increase GPT-4's accuracy, recall, or precision. GPT-4 was more likely to provide false positive answers when the 60 questions were submitted individually than when they were submitted together.
    CONCLUSIONS: GPT-4 reproducibly answered 3,600 questions about 60 papers on HIVDR with moderately high accuracy, recall, and precision. The instruction sheet's failure to improve these metrics suggests that more sophisticated approaches are necessary. Either enhanced prompt engineering or fine-tuning an open-source model could further improve an LLM's ability to answer questions about highly specialized HIVDR papers.
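For readers relating the headline figures, accuracy, recall, and precision all derive from per-question outcome counts. The counts in this small helper are illustrative, not the study's actual confusion matrix.

```python
# How the evaluation metrics relate to per-question outcomes
# (true/false positives and negatives). The example counts are
# invented for illustration; they are not the study's data.

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

m = metrics(tp=29, fp=4, fn=11, tn=16)
print(m)
```

Because the denominators differ, a model can show high accuracy while still missing many positives, which is why the paper reports all three figures rather than accuracy alone.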

  • Article Type: Journal Article
    This study aims to develop a digital retrieval system for art museums to solve the problems of inaccurate information and low retrieval efficiency in the digital management of cultural heritage. By introducing an improved Genetic Algorithm (GA), digital management and access efficiency are enhanced, bringing substantial optimization and innovation to the digital management of cultural heritage. Based on the collections of art museums, this study first integrates the collection's images, texts, and metadata with multi-source intelligent information to achieve a more accurate and comprehensive description of digital content. Second, a GA is introduced, and a GA-Convolutional Neural Network (GA2CNN) optimization model combining domain knowledge is proposed. Moreover, the convergence speed of the traditional GA is improved to adapt to the characteristics of cultural heritage data. Lastly, the Convolutional Neural Network (CNN), GA, and GA2CNN are compared to verify the proposed system's superiority. The results show that in all models, the actual value of the sample output is 2.62, which represents the real data observation. For sample number 5, compared with the actual value of 2.62, the predicted values of the GA2CNN and GA models are 2.6177 and 2.6313, with errors of 0.0023 and 0.0113, respectively. The CNN model's predicted value is 2.6237, with an error of 0.0037. The network fitting accuracy after optimization of the GA2CNN model is thus high, and the predicted value is very close to the actual value. The digital retrieval system integrated with the GA2CNN model performs well in enhancing retrieval efficiency and accuracy. This study provides technical support for the digital organization and display of cultural heritage and offers a valuable reference for innovative exploration of museum information management in the digital era.
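A genetic algorithm of the kind used to optimize the CNN component reduces to a selection-crossover-mutation loop. The bit-string "count the ones" objective below is a toy stand-in for a real fitness function such as validation accuracy over CNN hyperparameters; none of it reproduces the paper's GA2CNN specifics.

```python
# Bare-bones genetic algorithm: truncation selection, one-point
# crossover, point mutation. The onemax objective is a toy stand-in
# for a CNN-hyperparameter fitness; seeded for reproducibility.
import random

def fitness(bits):
    return sum(bits)  # toy objective: number of ones

def evolve(pop_size=20, length=16, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)            # point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children                 # elitist replacement
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the parent half survives unchanged each generation, the best fitness is monotone non-decreasing; the paper's convergence-speed improvement targets exactly how quickly such a loop approaches the optimum.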
