speech segmentation

  • Article type: Journal Article
    Rhythm is known to play an important role in infant language acquisition, but few infant language development studies have considered that rhythm is multimodal and shows strong connections between speech and the body. Based on the observation that infants sometimes show rhythmic motor responses when listening to auditory rhythms, the present study asked whether specific rhythm cues (pitch, intensity, or duration) would systematically increase infants' spontaneous rhythmic body movement, and whether their rhythmic movements would be associated with their speech processing abilities. We used pre-existing experimental and video data of 148 German-learning 7.5- and 9.5-month-old infants tested on their use of rhythm as a cue for speech segmentation. The infants were familiarized with an artificial language featuring syllables alternating in pitch, intensity, duration, or none of these cues. Subsequently, they were tested on their recognition of bisyllables based on perceived rhythm. We annotated infants' rhythmic movements in the videos, analyzed whether the rhythmic moving durations depended on the perceived rhythmic cue, and correlated them with the speech segmentation performance. The result was that infants' motor engagement was highest when they heard a duration-based speech rhythm. Moreover, we found an association between the quantity of infants' rhythmic motor responses and speech segmentation. However, contrary to the predictions, infants who exhibited fewer rhythmic movements showed a more mature performance in speech segmentation. In sum, the present study provides initial exploratory evidence that infants' spontaneous rhythmic body movements while listening to rhythmic speech are systematic, and may be linked with their language processing. Moreover, the results highlight the need for considering infants' spontaneous rhythmic body movements as a source of individual differences in infant auditory and speech perception.
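The reported (negative) association between movement quantity and segmentation maturity is a simple correlation across infants. A minimal sketch of that analysis; the per-infant movement durations and segmentation scores below are invented for illustration, not data from the study:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-infant values: seconds of rhythmic movement vs. a
# segmentation score (e.g., a looking-time difference, in seconds).
movement = [12.0, 8.5, 20.1, 5.2, 15.3, 3.8]
segmentation = [-0.4, 0.9, -1.1, 1.6, -0.2, 2.0]
r = pearson_r(movement, segmentation)
print(round(r, 2))  # strongly negative: more movement, less mature segmentation
```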

  • Article type: Journal Article
    Infants begin to segment word forms from fluent speech (a crucial task in lexical processing) between 4 and 7 months of age. Prior work has established that infants rely on a variety of cues available in the speech signal (i.e., prosodic, statistical, acoustic-segmental, and lexical) to accomplish this task. In two experiments with French-learning 6- and 10-month-olds, we use a psychoacoustic approach to examine if and how degradation of the two fundamental acoustic components extracted from speech by the auditory system, namely temporal (both frequency and amplitude modulation) and spectral information, impacts word form segmentation. Infants were familiarized with passages containing target words, in which frequency modulation (FM) information was replaced with pure tones using a vocoder, while amplitude modulation (AM) was preserved in either 8 or 16 spectral bands. Infants were then tested on their recognition of the target versus novel control words. While the 6-month-olds were unable to segment in either condition, the 10-month-olds succeeded, although only in the 16 spectral band condition. These findings suggest that 6-month-olds need FM temporal cues for speech segmentation while 10-month-olds do not, although they need the AM cues to be presented in enough spectral bands (i.e., 16). This developmental change observed in infants' sensitivity to spectrotemporal cues likely results from an increase in the range of available segmentation procedures and/or a shift from a vowel to a consonant bias in lexical processing between the two ages, as vowels are more affected by our acoustic manipulations. RESEARCH HIGHLIGHTS: Although segmenting speech into word forms is crucial for lexical acquisition, the acoustic information that infants' auditory system extracts to process continuous speech remains unknown. We examined infants' sensitivity to spectrotemporal cues in speech segmentation using vocoded speech, and revealed a developmental change between 6 and 10 months of age. We showed that FM information, that is, the fast temporal modulations of speech, is necessary for 6- but not 10-month-old infants to segment word forms. Moreover, reducing the number of spectral bands impacts 10-month-olds' segmentation abilities: they succeed when 16 bands are preserved, but fail with 8 bands.
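The vocoding manipulation (discard FM, keep the AM envelope) can be illustrated in a single band: extract the amplitude envelope and re-impose it on a pure tone. The study used 8 or 16 bandpass channels, which would require a filterbank this toy version omits; the sampling rate, carrier frequency, and smoothing window below are illustrative assumptions:

```python
import math

def tone_vocode(signal, fs, carrier_hz, win_s=0.02):
    """One-band vocoder sketch: extract the amplitude envelope (rectify +
    trailing moving average), then re-impose it on a pure-tone carrier,
    discarding the original fine structure / FM cues."""
    win = int(win_s * fs)
    rect = [abs(v) for v in signal]
    env, running = [], 0.0
    for i in range(len(rect)):
        running += rect[i]
        if i >= win:
            running -= rect[i - win]
        env.append(running / min(i + 1, win))
    return [e * math.sin(2 * math.pi * carrier_hz * i / fs) for i, e in enumerate(env)]

# A 300 Hz tone amplitude-modulated at 4 Hz, vocoded onto a 500 Hz carrier.
fs = 8000
am = [0.5 + 0.5 * math.sin(2 * math.pi * 4 * i / fs) for i in range(fs)]
signal = [a * math.sin(2 * math.pi * 300 * i / fs) for i, a in enumerate(am)]
vocoded = tone_vocode(signal, fs, carrier_hz=500)
# The 4 Hz AM envelope survives: loud near the AM peak, quiet near the trough.
peak = max(abs(v) for v in vocoded[400:600])      # around t ~ 1/16 s (AM maximum)
trough = max(abs(v) for v in vocoded[1400:1600])  # around t ~ 3/16 s (AM minimum)
print(round(peak, 3), round(trough, 3))
```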

  • Article type: Journal Article
    Acoustic features extracted from speech can help with the diagnosis of neurological diseases and monitoring of symptoms over time. Temporal segmentation of audio signals into individual words is an important pre-processing step needed prior to extracting acoustic features. Machine learning techniques could be used to automate speech segmentation via automatic speech recognition (ASR) and sequence-to-sequence alignment. While state-of-the-art ASR models achieve good performance on healthy speech, their performance significantly drops when evaluated on dysarthric speech. Fine-tuning ASR models on impaired speech can improve performance in dysarthric individuals, but it requires representative clinical data, which is difficult to collect and may raise privacy concerns. This study explores the feasibility of using two augmentation methods to increase ASR performance on dysarthric speech: 1) healthy individuals varying their speaking rate and loudness (as is often used in assessments of pathological speech); 2) synthetic speech with variations in speaking rate and accent (to ensure more diverse vocal representations and fairness). Experimental evaluations showed that fine-tuning a pre-trained ASR model with data from these two sources outperformed a model fine-tuned only on real clinical data and matched the performance of a model fine-tuned on the combination of real clinical data and synthetic speech. When evaluated on held-out acoustic data from 24 individuals with various neurological diseases, the best-performing model achieved an average word error rate of 5.7% and a mean correct count accuracy of 94.4%. In segmenting the data into individual words, a mean intersection-over-union of 89.2% was obtained against manual parsing (ground truth). It can be concluded that emulated and synthetic augmentations can significantly reduce the need for real clinical data of dysarthric speech when fine-tuning ASR models and, in turn, for speech segmentation.
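The segmentation metric reported here (mean intersection-over-union of predicted word intervals against manual parsing) is straightforward to compute per word; the boundary times below are hypothetical:

```python
def interval_iou(pred, truth):
    """Intersection-over-union of two time intervals (start, end), in seconds."""
    start = max(pred[0], truth[0])
    end = min(pred[1], truth[1])
    intersection = max(0.0, end - start)
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - intersection
    return intersection / union if union > 0 else 0.0

# ASR-aligned word boundaries vs. manual parsing (hypothetical times, s).
predicted = [(0.10, 0.55), (0.60, 1.05)]
manual = [(0.12, 0.50), (0.58, 1.10)]
ious = [interval_iou(p, t) for p, t in zip(predicted, manual)]
mean_iou = sum(ious) / len(ious)
print(f"mean IoU: {mean_iou:.3f}")
```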

  • Article type: Journal Article
    Computational models of infant word-finding typically operate over transcriptions of infant-directed speech corpora. It is now possible to test models of word segmentation on speech materials, rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies measuring infants' capacity for learning from spoken sentences. Correspondence with infant outcomes in such experiments is an appropriate benchmark for models of infants. We demonstrate such an analysis by applying the DP-Parse model of Algayres and colleagues to auditory stimuli used in infant psycholinguistic experiments by Pelucchi and colleagues. The DP-Parse model takes speech as input and creates multiple overlapping embeddings from each utterance. Prospective words are identified as clusters of similar embedded segments. This allows segmentation of each utterance into possible words, using a dynamic programming method that maximizes the frequency of constituent segments. We show that DP-Parse mimics American English learners' performance in extracting words from Italian sentences, favoring the segmentation of words with high syllabic transitional probability. This kind of computational analysis over actual stimuli from infant experiments may be helpful in tuning future models to match human performance.
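DP-Parse itself operates on self-supervised speech embeddings, but its dynamic-programming step (choose the segmentation whose segments are most frequent) can be illustrated on character strings, with substring counts standing in for cluster frequencies. A toy analogue under those assumptions, not the published implementation:

```python
import math
from collections import Counter

def dp_segment(utterance, prob, max_len=4, floor=1e-6):
    """Viterbi-style DP: pick the segmentation whose segments have the
    highest total log-probability (a toy analogue of DP-Parse's
    frequency-maximizing parse)."""
    n = len(utterance)
    best = [(-math.inf, 0) for _ in range(n + 1)]  # (score, backpointer)
    best[0] = (0.0, 0)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            seg = utterance[start:end]
            score = best[start][0] + math.log(prob.get(seg, floor))
            if score > best[end][0]:
                best[end] = (score, start)
    segs, i = [], n  # backtrack from the end of the utterance
    while i > 0:
        start = best[i][1]
        segs.append(utterance[start:i])
        i = start
    return segs[::-1]

# Toy "corpus": substring counts stand in for embedding-cluster frequencies.
corpus = ["babi", "duda", "babiduda", "dudababi"]
counts = Counter(c[i:j] for c in corpus
                 for i in range(len(c))
                 for j in range(i + 1, min(i + 5, len(c) + 1)))
total = sum(counts.values())
prob = {seg: n / total for seg, n in counts.items()}
print(dp_segment("babiduda", prob))  # ['babi', 'duda']
```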

  • Article type: Journal Article
    Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects, but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure to find that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such, but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions on how to effectively measure the knowledge arising from these learning experiences.
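The conditional (transitional) probabilities tracked in such statistical-learning tasks are simple to compute over a syllable stream; the syllables below are invented for illustration:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP: P(next | current) = count(current, next) / count(current)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unit_counts = Counter(syllables[:-1])
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream built from three "words" (ba-bi, du-da, go-ku) in varying order.
stream = ["ba", "bi", "du", "da", "go", "ku", "ba", "bi",
          "go", "ku", "du", "da", "ba", "bi"]
tps = transitional_probabilities(stream)
# Within-word transitions have TP 1.0; word-boundary transitions are lower,
# which is the cue learners exploit to place word boundaries.
print(tps[("ba", "bi")], tps[("bi", "du")])
```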

  • Article type: Journal Article
    During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
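Speech-brain coherence of this kind is a magnitude-squared coherence between the speech envelope and the neural signal, estimated over epochs. A stripped-down, single-frequency version on synthetic signals; 2 Hz is chosen only because it falls in the reported 1-3 Hz stressed-syllable band, and the sampling rate and epoch count are illustrative assumptions:

```python
import cmath
import math
import random

def coherence_at(x, y, fs, freq, n_epochs):
    """Magnitude-squared coherence between x and y at one frequency,
    estimated over equal-length epochs (a stripped-down Welch estimate)."""
    n = len(x) // n_epochs
    sxx = syy = 0.0
    sxy = 0 + 0j
    for e in range(n_epochs):
        seg_x = x[e * n:(e + 1) * n]
        seg_y = y[e * n:(e + 1) * n]
        # DFT coefficient at `freq` for each epoch
        fx = sum(v * cmath.exp(-2j * math.pi * freq * t / fs) for t, v in enumerate(seg_x))
        fy = sum(v * cmath.exp(-2j * math.pi * freq * t / fs) for t, v in enumerate(seg_y))
        sxy += fx * fy.conjugate()
        sxx += abs(fx) ** 2
        syy += abs(fy) ** 2
    return abs(sxy) ** 2 / (sxx * syy)

random.seed(1)
fs, dur, f0 = 100, 20, 2.0  # 100 Hz sampling, 20 s, 2 Hz "stressed-syllable" rate
t = [i / fs for i in range(fs * dur)]
envelope = [math.sin(2 * math.pi * f0 * ti) for ti in t]
# A "brain" signal that tracks the envelope (phase-shifted, noisy) vs. pure noise.
brain = [0.6 * math.sin(2 * math.pi * f0 * ti + 0.5) + random.gauss(0, 1) for ti in t]
noise = [random.gauss(0, 1) for _ in t]
coh_tracking = coherence_at(envelope, brain, fs, f0, n_epochs=10)
coh_baseline = coherence_at(envelope, noise, fs, f0, n_epochs=10)
print(round(coh_tracking, 2), round(coh_baseline, 2))  # high vs. near zero
```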

  • Article type: Journal Article
    The prominent role of allophonic cues in English speech segmentation has been widely recognized by phonologists and psycholinguists. However, little inquiry has been devoted to analysing the perception of these noncontrastive allophonic cues by Arab EFL learners. Accordingly, the present study is an attempt to examine the exploitation of allophonic cues, mainly aspiration, glottalization and approximant devoicing, at English word junctures by 40 Jordanian PhD students. Moreover, it aims to find out which allophonic cues are perceived more accurately during the segmentation process and whether there is any evidence for Universal Grammar markedness. The experiment was conducted through a forced-choice identification task adopted from Altenberg (Second Lang Res 21:325-358, 2005) and Rojczyk et al. (Res Lang 1:15-29, 2016). The results of an ANOVA unveiled a statistically significant difference between the three types of allophonic cues, viz. aspiration, glottalization and approximant devoicing: the participants performed better on stimuli marked by glottalization than on those marked by aspiration or approximant devoicing. This result provides further evidence for the universality of glottalization as a boundary cue in English speech segmentation. Overall, the Jordanian PhD students failed to perceive the allophonic cues accurately and to exploit them to detect word boundaries. The present inquiry has the potential to provide several recommendations for syllabus designers and second/foreign-language teachers and learners.

  • Article type: Journal Article
    Adults are able to use visual prosodic cues in the speaker's face to segment speech. Furthermore, eye-tracking data suggest that learners will shift their gaze to the mouth during visual speech segmentation. Although these findings suggest that the mouth may be viewed more than the eyes or nose during visual speech segmentation, no study has examined the direct functional importance of individual features; thus, it is unclear which visual prosodic cues are important for word segmentation. In this study, we examined the impact of first removing (Experiment 1) and then isolating (Experiment 2) individual facial features on visual speech segmentation. Segmentation performance was above chance in all conditions except when the visual display was restricted to the eye region (the eyes-only condition in Experiment 2). This suggests that participants were able to segment speech when they could visually access the mouth but not when the mouth was completely removed from the visual display, providing evidence that the visual prosodic cues conveyed by the mouth are sufficient, and likely necessary, for visual speech segmentation.

  • Article type: Journal Article
    Parkinson's disease (PD) is a neurodegenerative disorder with slow progression whose symptoms can be identified at late stages. Early diagnosis and treatment of PD can help to relieve the symptoms and delay progression. However, this is very challenging due to the similarities between the symptoms of PD and other diseases. The current study proposes a generic framework for the diagnosis of PD using handwritten images and/or speech signals. For the handwritten images, 8 pre-trained convolutional neural networks (CNNs), fine-tuned via transfer learning with the Aquila Optimizer, were trained on the NewHandPD dataset to diagnose PD. For the speech signals, features from the MDVR-KCL dataset were extracted numerically using 16 feature extraction algorithms and fed to 4 different machine learning algorithms tuned by a grid search, and graphically using 5 different techniques and fed to the 8 pre-trained CNN structures. The authors propose a new technique for extracting features from the voice dataset based on segmentation with variable speech-signal segment durations, i.e., the use of different durations in the segmentation phase. Using the proposed technique, 5 datasets with 281 numerical features are generated. Results from different experiments were collected and recorded. For the NewHandPD dataset, the best reported metric is 99.75% using the VGG19 structure. For the MDVR-KCL dataset, the best reported metrics are 99.94% using the KNN and SVM ML algorithms with the combined numerical features, and 100% using the combined mel-spectrogram graphical features and the VGG19 structure. These results are better than those of other state-of-the-art studies.
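The variable-segment-duration idea (extract features from the same recording cut at several different segment lengths) can be sketched as follows. The signal, durations, and the RMS feature below are illustrative stand-ins; the study's 16 feature extraction algorithms are not reproduced here:

```python
import math

def segment_signal(signal, fs, segment_durations):
    """Cut `signal` into consecutive segments whose lengths follow
    `segment_durations` (in seconds), cycling until the signal is consumed."""
    segments, pos, k = [], 0, 0
    while pos < len(signal):
        n = int(segment_durations[k % len(segment_durations)] * fs)
        segments.append(signal[pos:pos + n])
        pos += n
        k += 1
    return segments

def rms(seg):
    """Root-mean-square amplitude of one segment (a toy per-segment feature)."""
    return math.sqrt(sum(v * v for v in seg) / len(seg))

fs = 1000
signal = [math.sin(2 * math.pi * 5 * i / fs) for i in range(3 * fs)]  # 3 s toy signal
# Extract the same feature at several hypothetical segment durations (seconds),
# yielding one feature vector per duration setting.
for dur in (0.5, 1.0, 1.5):
    feats = [rms(s) for s in segment_signal(signal, fs, [dur])]
    print(dur, [round(f, 3) for f in feats])
```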

  • Article type: Journal Article
    Phonological duration differences in quantity languages can be problematic for second language learners whose native language does not use duration contrastively. Recent studies have found improvement in the processing of non-native vowel duration contrasts with the use of listen-and-repeat training, and the current study explores the efficacy of a similar methodology for consonant duration contrasts. Eighteen adult participants underwent two days of listen-and-repeat training with pseudoword stimuli containing either a sibilant or a stop consonant contrast. The results were examined with psychophysiological event-related potentials (mismatch negativity and P3), behavioral discrimination tests, and a production task. The results revealed no training-related effects in the event-related potentials or the production task, but behavioral discrimination performance improved. Furthermore, differences emerged between the processing of the two consonant types. The findings suggest that stop consonants are processed more slowly than sibilants, and are discussed with regard to possible segmentation difficulties.