Recurrent neural networks

  • Article type: Journal Article
    Vehicular edge computing (VEC), a promising paradigm for the development of emerging intelligent transportation systems, can provide lower service latency for vehicular applications. However, meeting the stringent latency requirements of such applications remains a challenge in a VEC system with limited resources. In addition, existing methods focus on handling the offloading task in a certain time slot with statically allocated resources, but ignore the heterogeneous tasks' different resource requirements, resulting in resource wastage. To solve the real-time task offloading and heterogeneous resource allocation problem in the VEC system, we propose a decentralized solution based on an attention mechanism and recurrent neural networks (RNNs) with a multi-agent distributed deep deterministic policy gradient (AR-MAD4PG). First, to address the partial observability of agents, we construct a shared agent graph and propose a periodic communication mechanism that enables edge nodes to aggregate information from other edge nodes. Second, to help agents better understand the current system state, we design an RNN-based feature extraction network to capture the historical state and resource allocation information of the VEC system. Third, to tackle the challenges of the excessively large joint observation-action space and interference from irrelevant information, we adopt a multi-head attention mechanism to compress the dimensionality of the agents' observation-action space. Finally, we build a simulation model based on actual vehicle trajectories, and the experimental results show that our proposed method outperforms existing approaches.

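The agent design described above (an RNN that summarizes the historical VEC state plus multi-head attention that compresses the joint observation-action space) can be illustrated with a short PyTorch sketch. This is not the paper's AR-MAD4PG implementation: all dimensions, the GRU history encoder, and the single-query attention pooling over neighboring edge nodes are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class AttentionCritic(nn.Module):
    """Minimal sketch: GRU history encoder + multi-head attention pooling
    over neighbor observation-action embeddings (all dimensions are assumed)."""

    def __init__(self, obs_dim=32, act_dim=8, hidden=64, n_heads=4):
        super().__init__()
        self.history_rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.oa_embed = nn.Linear(obs_dim + act_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obs_history, neighbor_obs, neighbor_act):
        # obs_history: (B, T, obs_dim) -- this agent's past observations
        # neighbor_obs: (B, N, obs_dim), neighbor_act: (B, N, act_dim)
        _, h = self.history_rnn(obs_history)           # h: (1, B, hidden)
        query = h.transpose(0, 1)                      # (B, 1, hidden)
        kv = self.oa_embed(torch.cat([neighbor_obs, neighbor_act], dim=-1))
        context, _ = self.attn(query, kv, kv)          # compressed neighbor info
        joint = torch.cat([query, context], dim=-1).squeeze(1)
        return self.q_head(joint)                      # Q-value estimate

critic = AttentionCritic()
q = critic(torch.randn(2, 10, 32), torch.randn(2, 5, 32), torch.randn(2, 5, 8))
print(q.shape)  # torch.Size([2, 1])
```
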
  • Article type: Journal Article
    This study aims to explore methods for classifying and describing volleyball training videos using deep learning techniques. By developing an innovative model that integrates Bi-directional Long Short-Term Memory (BiLSTM) and attention mechanisms, referred to as BiLSTM-Multimodal Attention Fusion Temporal Classification (BiLSTM-MAFTC), the study enhances the accuracy and efficiency of volleyball video content analysis. Initially, the model encodes features from various modalities into feature vectors, capturing different types of information such as positional and modal data. A BiLSTM network is then used to model multimodal temporal information, while spatial and channel attention mechanisms are incorporated to form a dual-attention module. This module establishes correlations between different modality features, extracting valuable information from each modality and uncovering complementary information across modalities. Extensive experiments validate the method's effectiveness and state-of-the-art performance. Compared to conventional recurrent neural network algorithms, the model achieves recognition accuracies exceeding 95% under the Top-1 and Top-5 metrics for action recognition, with a recognition speed of 0.04 s per video. The study demonstrates that the model can effectively process and analyze multimodal temporal information, including athlete movements, positional relationships on the court, and ball trajectories. Consequently, precise classification and description of volleyball training videos are achieved. This advancement significantly enhances the efficiency of coaches and athletes in volleyball training and provides valuable insights for broader sports video analysis research.

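As a rough illustration of the dual-attention idea described above (channel attention plus attention over the temporal axis on top of a BiLSTM), the following PyTorch sketch uses assumed layer sizes and a simple squeeze-and-excite-style channel gate; it is not the published BiLSTM-MAFTC architecture.

```python
import torch
import torch.nn as nn

class DualAttentionBiLSTM(nn.Module):
    """Minimal sketch of a BiLSTM with channel + temporal attention over
    fused multimodal features; layer sizes and the fusion scheme are assumed."""

    def __init__(self, feat_dim=128, hidden=128, n_classes=10):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        d = 2 * hidden
        # channel attention: gate each feature channel (squeeze-and-excite style)
        self.channel_gate = nn.Sequential(
            nn.Linear(d, d // 4), nn.ReLU(), nn.Linear(d // 4, d), nn.Sigmoid())
        # temporal attention: weight each time step before pooling
        self.time_score = nn.Linear(d, 1)
        self.classifier = nn.Linear(d, n_classes)

    def forward(self, x):                       # x: (B, T, feat_dim) fused modalities
        h, _ = self.bilstm(x)                   # (B, T, 2*hidden)
        gate = self.channel_gate(h.mean(dim=1))         # (B, 2*hidden)
        h = h * gate.unsqueeze(1)                       # re-weight channels
        w = torch.softmax(self.time_score(h), dim=1)    # (B, T, 1)
        pooled = (w * h).sum(dim=1)                     # attention pooling over time
        return self.classifier(pooled)                  # action-class logits

model = DualAttentionBiLSTM()
logits = model(torch.randn(4, 60, 128))   # e.g., 60 frames of fused features
print(logits.shape)                       # torch.Size([4, 10])
```
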
  • Article type: Journal Article
    Classification of protein families from their sequences is an enduring task in proteomics and related studies. Numerous deep-learning models have been moulded to tackle this challenge, but due to their black-box character, they still fall short in reliability. Here, we present a novel explainability pipeline that explains the pivotal decisions of a deep learning model on the classification of the eukaryotic kinome. Based on a comparative and experimental analysis of the most cutting-edge deep learning algorithms, the best deep learning model, CNN-BLSTM, was chosen to classify the eight eukaryotic kinase sequences into their corresponding families. As a substitute for the conventional class-activation-map-based interpretation of CNN-based models in this domain, we have cascaded the Grad-CAM and Integrated Gradients (IG) explainability approaches for improved and responsible results. To ensure the trustworthiness of the classifier, we masked the kinase domain traces identified by the explainability pipeline and observed a class-specific drop in F1-score from 0.96 to 0.76. In compliance with the explainable AI paradigm, our results are promising and contribute to enhancing the trustworthiness of deep learning models for biological sequence-associated studies.

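The explainability cascade above pairs Grad-CAM with Integrated Gradients. The sketch below shows only the Integrated Gradients half, implemented from scratch over a toy CNN-BiLSTM on one-hot protein sequences; the model sizes, the all-zero baseline, and the 50-step path integral are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Tiny stand-in for a CNN-BLSTM kinase-family classifier (sizes assumed)."""
    def __init__(self, n_aa=21, n_families=8):
        super().__init__()
        self.conv = nn.Conv1d(n_aa, 64, kernel_size=7, padding=3)
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_families)

    def forward(self, x):               # x: (B, L, n_aa) one-hot sequence
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.fc(h.mean(dim=1))

def integrated_gradients(model, x, target, steps=50):
    """Plain Integrated Gradients: average gradients along the straight path
    from an all-zero baseline to the input, then scale by (input - baseline)."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for a in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + a * (x - baseline)).requires_grad_(True)
        score = model(xi)[:, target].sum()
        grad, = torch.autograd.grad(score, xi)
        total += grad
    return (x - baseline) * total / steps   # per-position attribution

model = CNNBiLSTM()
seq = torch.zeros(1, 300, 21).scatter_(2, torch.randint(0, 21, (1, 300, 1)), 1.0)
attr = integrated_gradients(model, seq, target=3)
print(attr.abs().sum(dim=2).topk(5).indices)  # positions flagged as most influential
```
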
  • Article type: Journal Article
    Animal behavior emerges from the collective dynamics of neurons, making it vulnerable to damage. Paradoxically, many organisms exhibit a remarkable ability to maintain significant behavior even after large-scale neural injury. The molecular underpinnings of this extreme robustness remain largely unknown. Here, we develop a quantitative pipeline to measure long-lasting latent states in planarian flatworm behaviors during whole-brain regeneration. By combining >20,000 animal trials with neural network modeling, we show that long-range volumetric peptidergic signals allow the planarian to rapidly restore coarse behavioral output after large perturbations to the nervous system, while the slow restoration of small-molecule neuromodulator functions refines precision. This relies on the different time and length scales of neuropeptide and small-molecule transmission to generate incoherent patterns of neural activity that competitively regulate behavior. Controlling behavior through opposing communication mechanisms creates a more robust system than either alone and may serve as a generalizable approach for constructing robust neural networks.

  • Article type: Journal Article
    OBJECTIVE: Recent studies point out that the dynamics and interaction of cell populations within their environment are related to several biological processes in immunology. Hence, single-cell analysis in immunology now relies on spatial omics. Moreover, recent literature suggests that immunology scenarios are hierarchically organized, including unknown cell behaviors appearing in different proportions across some observable control and therapy groups. These dynamic behaviors play a crucial role in identifying the causes of processes such as inflammation, aging, and fighting off pathogens or cancerous cells. In this work, we use a self-supervised learning approach to discover these behaviors associated with cell dynamics in an immunology scenario.
    METHODS: Specifically, we study the different responses of the control and therapy groups in a scenario involving inflammation due to infarct, with a focus on neutrophil migration within blood vessels. Starting from a set of hand-crafted spatio-temporal features, we use a recurrent neural network to generate embeddings that properly describe the dynamics of the migration processes. The network is trained using a novel multi-task contrastive loss that, on the one hand, models the hierarchical structure of our scenario (groups-behaviors-samples) and, on the other, ensures temporal consistency within the embedding, enforcing that subsequent temporal samples obtained from a given cell stay close in the latent space.
    RESULTS: Our experimental results demonstrate that the resulting embeddings improve the separability of cell behaviors and the log-likelihood of the therapies compared to hand-crafted feature extraction and recent state-of-the-art methods, even with reduced dimensionality (16 vs. 21 hand-crafted features).
    CONCLUSIONS: Our approach enables single-cell analyses at the population level and can automatically discover behaviors shared among different groups. This, in turn, enables the prediction of therapy effectiveness based on the proportions of these behaviors within a study group.

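The multi-task contrastive objective described in METHODS (a group-level contrastive term plus a temporal-consistency term) might look roughly like the following sketch; the SupCon-style formulation, the squared-distance temporal term, and the weighting `lam` are assumptions, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def multitask_contrastive_loss(z_t, z_tnext, group_labels, temperature=0.1,
                               lam=1.0):
    """Sketch of a two-term objective: a supervised contrastive term that pulls
    embeddings of cells from the same observable group together, plus a
    temporal-consistency term keeping consecutive embeddings of a cell close.

    z_t, z_tnext: (B, D) embeddings of each cell at time t and t+1
    group_labels: (B,) control/therapy group index of each cell
    """
    z = F.normalize(z_t, dim=1)
    sim = z @ z.t() / temperature                        # (B, B) similarities
    same_group = group_labels.unsqueeze(0) == group_labels.unsqueeze(1)
    mask = ~torch.eye(len(z), dtype=torch.bool)          # exclude self-pairs
    # supervised contrastive term: same-group pairs act as positives
    log_prob = sim - torch.logsumexp(sim.masked_fill(~mask, -1e9), dim=1,
                                     keepdim=True)
    pos = (same_group & mask).float()
    contrastive = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    # temporal consistency: subsequent samples of the same cell stay close
    temporal = (z_t - z_tnext).pow(2).sum(dim=1)
    return (contrastive + lam * temporal).mean()

z_t, z_next = torch.randn(16, 32), torch.randn(16, 32)
labels = torch.randint(0, 2, (16,))
print(multitask_contrastive_loss(z_t, z_next, labels))
```
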
  • Article type: Journal Article
    Recurrent Neural Networks (RNNs), a type of machine learning technique, have recently drawn a lot of interest in numerous fields, including epidemiology. Implementing public health interventions in the field of epidemiology depends on efficient modeling and outbreak prediction. Because RNNs can capture sequential dependencies in data, they have become highly effective tools in this field. In this paper, the use of RNNs in epidemic modeling is examined, with a focus on the extent to which they can handle the inherent temporal dynamics in the spread of diseases. The mathematical representation of epidemics requires taking time-dependent variables into account, such as the rate at which infections spread and the long-term effects of interventions. The goal of this study is to use an intelligent computing solution based on RNNs to provide numerical performances and interpretations for the SEIR nonlinear system based on the propagation of the Zika virus (SEIRS-PZV) model. The four patient dynamics, namely susceptible patients S(y), exposed patients admitted to a hospital E(y), the fraction of infective individuals I(y), and recovered patients R(y), are represented by the epidemic version of the nonlinear system, i.e., the SEIR model. SEIRS-PZV is represented by ordinary differential equations (ODEs), which are then solved by the Adams method using the Mathematica software to generate a dataset. The dataset was used as the output for the RNN to train the model and examine results such as regressions, correlations, error histograms, etc. For the RNN, we used 100% of the data to train a model with 15 hidden layers and a delay of 2 seconds. The input for the RNN is a time-series sequence from 0 to 5, with a step size of 0.05. In the end, we compared the approximated solution with the exact solution by plotting them on the same graph and generating the absolute error plot for each of the four cases of SEIRS-PZV. Predictions made by the model appeared to become more accurate as the mean squared error (MSE) decreased. This decrease in the MSE indicated an improved fit to the observed data, suggesting that the variance between the model's predicted values and the actual values was dropping. A minimal absolute error, almost equal to zero, was obtained, which further supports the usefulness of the suggested strategy. A small absolute error shows how closely the model's predictions match the ground-truth values, thus indicating the accuracy and precision of the model's output.

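The data-generation step described above (solving the SEIRS-PZV ODEs with the Adams method over y from 0 to 5 at a step of 0.05) can be reproduced in spirit with SciPy; here LSODA, which uses Adams formulas on non-stiff segments, stands in for Mathematica's Adams solver, and the equations and parameter values below are illustrative placeholders rather than the paper's exact SEIRS-PZV system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative SEIR-type system; rates are placeholders, not the paper's values.
beta, sigma, gamma = 0.9, 0.5, 0.3   # transmission, incubation, recovery rates

def seir(y, state):
    S, E, I, R = state
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Time grid matching the abstract: y from 0 to 5 with a step of 0.05.
t_eval = np.arange(0.0, 5.0 + 1e-9, 0.05)
sol = solve_ivp(seir, (0.0, 5.0), [0.9, 0.05, 0.05, 0.0], t_eval=t_eval,
                method="LSODA")  # LSODA applies Adams formulas on non-stiff parts

dataset = np.column_stack([sol.t, sol.y.T])   # columns: y, S(y), E(y), I(y), R(y)
print(dataset.shape)                          # (101, 5) -> RNN training targets
```
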
  • Article type: Journal Article
    The objective of this study is to investigate methodologies concerning enterprise financial sharing and risk identification to mitigate concerns associated with the sharing and safeguarding of financial data. Initially, the analysis examines security vulnerabilities inherent in conventional financial information sharing practices. Subsequently, blockchain technology is introduced to transition the various entity nodes within centralized enterprise financial networks into a decentralized blockchain framework, culminating in the formulation of a blockchain-based model for enterprise financial data sharing. Concurrently, the study integrates the Bi-directional Long Short-Term Memory (BiLSTM) algorithm with the transformer model, presenting an enterprise financial risk identification model referred to as the BiLSTM-fused transformer model. This model combines multimodal sequence modeling with a comprehensive understanding of both textual and visual data. It stratifies financial values into levels 1 to 5, where level 1 signifies the most favorable financial condition, followed by relatively good (level 2), average (level 3), high risk (level 4), and severe risk (level 5). Following model construction, experimental analysis reveals that, in comparison to the Byzantine Fault Tolerance (BFT) algorithm mechanism, the proposed model achieves a throughput exceeding 80 with a node count of 146. Both the data message leakage rate and the average packet loss rate remain below 10%. Moreover, when compared with the recurrent neural network (RNN) algorithm, this model demonstrates a risk identification accuracy surpassing 94%, an AUC value exceeding 0.95, and a reduction of approximately 10 s in the time required for risk identification. Consequently, this study facilitates the more precise and efficient identification of potential risks, thereby furnishing crucial support for enterprise risk management and strategic decision-making.

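A minimal sketch of the "BiLSTM-fused transformer" idea follows: a BiLSTM front end whose outputs are re-encoded by a Transformer encoder and mapped to the five risk levels. The feature dimension, layer counts, and mean-pooled classification head are assumptions, since the paper's exact fusion scheme is not reproduced here.

```python
import torch
import torch.nn as nn

class BiLSTMFusedTransformer(nn.Module):
    """Minimal sketch of a BiLSTM whose outputs feed a Transformer encoder,
    classifying a financial-indicator sequence into risk levels 1-5.
    Layer sizes and the fusion scheme are assumptions, not the paper's spec."""

    def __init__(self, feat_dim=24, hidden=64, n_levels=5):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * hidden, n_levels)

    def forward(self, x):                # x: (B, T, feat_dim) financial indicators
        h, _ = self.bilstm(x)            # local sequential features
        h = self.encoder(h)              # global self-attention over the sequence
        return self.head(h.mean(dim=1))  # logits for levels 1 (best) .. 5 (severe)

model = BiLSTMFusedTransformer()
print(model(torch.randn(8, 12, 24)).shape)   # torch.Size([8, 5]) -> 5 risk levels
```
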
  • Article type: Journal Article
    A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin in neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the duration of the first interval, modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.

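The contraction bias has a compact normative reading: if a noisy measurement of the first interval is combined with a prior centered on the mean stimulus, the posterior-mean estimate is pulled toward that mean. The numbers below are purely illustrative, not fitted to the paper's data.

```python
# Illustrative numbers only: a Gaussian prior over interval durations
# (centered on the mean stimulus) combined with a noisy measurement yields a
# posterior-mean estimate pulled toward the prior mean -- the contraction bias.
mu_prior, sigma_prior = 600.0, 120.0    # ms, prior over first-interval duration
sigma_obs = 80.0                        # ms, sensory measurement noise

def bayes_estimate(measurement):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)   # weight on the data
    return w * measurement + (1 - w) * mu_prior

for true_duration in (400, 600, 800):
    est = bayes_estimate(true_duration)
    print(f"true {true_duration} ms -> estimate {est:.0f} ms "
          f"(shifted {est - true_duration:+.0f} toward the mean)")
```

With these illustrative numbers, a 400 ms interval is reported as roughly 460 ms and an 800 ms interval as roughly 740 ms, both shifted toward the 600 ms prior mean, which is the qualitative signature the trained spiking networks reproduce.
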
  • Article type: Journal Article
    Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms. One of these mechanisms is the inter-neuron delays caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, it is thus far unclear why distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that by optimizing inter-node delays, the memory capacity of the network matches the memory requirements of the task. As such, networks concentrate their memory capacity on the points in the past that contain the most information for the task at hand. Moreover, we show that DDNs have a greater total linear memory capacity, with the same amount of non-linear processing power.

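A distance-based delay network can be sketched by giving each reservoir connection an integer delay proportional to the distance between the two neurons and reading pre-synaptic states from a ring buffer of past activity. The NumPy sketch below uses assumed sizes, connection density, and a 0-9-step delay range; it only illustrates the delayed update rule, not the optimization of delays discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N, rho = 100, 0.9

# Random 2-D positions; each connection i<-j is delayed proportionally to the
# Euclidean distance between the two neurons (quantized to whole time steps).
pos = rng.uniform(0, 1, size=(N, 2))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
delay = np.rint(dist / dist.max() * 9).astype(int)         # delays in 0..9 steps

W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)   # sparse reservoir
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))             # set spectral radius
w_in = rng.normal(0, 1, (N, 1))

max_d = delay.max() + 1
history = np.zeros((max_d, N))        # ring buffer of past reservoir states

def step(u, t):
    """One reservoir update where each weight reads the pre-synaptic neuron's
    state from delay[i, j] extra steps in the past (DDN-style update)."""
    delayed = history[(t - 1 - delay) % max_d, np.arange(N)[None, :]]  # (N, N)
    x = np.tanh((W * delayed).sum(axis=1) + w_in[:, 0] * u)
    history[t % max_d] = x
    return x

states = np.array([step(u, t) for t, u in enumerate(np.sin(0.2 * np.arange(200)))])
print(states.shape)   # (200, 100) reservoir states for a linear readout to fit
```

A standard ESN is recovered by setting all delays to zero; the abstract's claim is that letting delays grow with distance spreads the reservoir's linear memory over a wider, task-matched range of lags.
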
  • Article type: Journal Article
    Models have always been central to inferring molecular evolution and to reconstructing phylogenetic trees. Their use typically involves the development of a mechanistic framework reflecting our understanding of the underlying biological processes, such as nucleotide substitutions, and the estimation of model parameters by maximum likelihood or Bayesian inference. However, deriving and optimizing the likelihood of the data is not always possible under complex evolutionary scenarios, or even tractable for large datasets, often leading to unrealistic simplifying assumptions in the fitted models. To overcome this issue, we coupled stochastic simulations of genome evolution with a new supervised deep learning model to infer key parameters of molecular evolution. Our model is designed to directly analyze multiple sequence alignments and estimate per-site evolutionary rates and divergence, without requiring a known phylogenetic tree. The accuracy of our predictions matched that of likelihood-based phylogenetic inference when rate heterogeneity followed a simple gamma distribution, but strongly exceeded it under more complex patterns of rate variation, such as codon models. Our approach is highly scalable and can be efficiently applied to genomic data, as we showed on a dataset of 26 million nucleotides from the clownfish clade. Our simulations also showed that integrating per-site rates obtained by deep learning within a Bayesian framework led to significantly more accurate phylogenetic inference, particularly with respect to the estimated branch lengths. We thus propose that future advancements in phylogenetic analysis will benefit from a semi-supervised learning approach that combines deep-learning estimation of substitution rates, which allows for more flexible models of rate variation, with probabilistic inference of the phylogenetic tree, which guarantees interpretability and a rigorous assessment of statistical support.

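One simple way to realize tree-free per-site rate estimation of the kind described above is to summarize each alignment column by an order-invariant statistic and regress rates with a small convolutional network. The sketch below does exactly that with assumed layer sizes and column-frequency features; it is not the architecture used in the paper, and the training targets would come from genome-evolution simulations as the abstract describes.

```python
import torch
import torch.nn as nn

class PerSiteRateNet(nn.Module):
    """Hedged sketch, not the paper's architecture: each alignment column is
    summarized by its nucleotide frequencies (order-invariant across taxa),
    and a 1-D CNN over sites regresses a per-site evolutionary rate."""

    def __init__(self, n_states=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_states, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1), nn.Softplus())  # rates > 0

    def forward(self, alignment_onehot):
        # alignment_onehot: (B, n_taxa, n_sites, 4) one-hot encoded MSA
        freqs = alignment_onehot.float().mean(dim=1)        # (B, n_sites, 4)
        return self.net(freqs.transpose(1, 2)).squeeze(1)   # (B, n_sites) rates

# Toy usage with random labels standing in for simulated per-site rates.
msa = torch.zeros(2, 20, 500, 4).scatter_(3, torch.randint(0, 4, (2, 20, 500, 1)), 1.0)
model = PerSiteRateNet()
rates = model(msa)
loss = nn.functional.mse_loss(rates, torch.rand(2, 500))
print(rates.shape, loss.item())
```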