readability

  • Article type: Journal Article
    Aims/Background: Seroma formation is the most common complication following breast surgery. However, there is little evidence on the readability of online patient education materials on this issue. This study aimed to assess the accessibility and readability of the relevant online information. Methods: This systematic review of the literature identified 37 relevant websites for further analysis. The readability of each online article was assessed using a range of readability formulae. Results: The average Flesch Reading Ease score for all patient education materials was 53.9 (± 21.9) and the average Flesch-Kincaid reading grade level was 7.32 (± 3.1), suggesting the materials were 'fairly difficult' to read and above the recommended reading level. Conclusion: Online patient education materials regarding post-surgery breast seroma are written at a higher-than-recommended reading grade level for the public. Improvement would allow all patients, regardless of literacy level, to access such resources to aid decision-making around undergoing breast surgery.
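The two metrics reported above can be approximated directly from word, sentence, and syllable counts. A minimal Python sketch, using a naive vowel-group syllable heuristic (an assumption for illustration; real tools use pronunciation dictionaries, so scores will differ slightly from published ones):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, drop a trailing silent "e".
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return round(fre, 1), round(fkgl, 1)
```

Higher FRE means easier text (the 53.9 above falls in the 'fairly difficult' band), while FKGL maps directly to a US school grade.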

  • Article type: Journal Article
    Most cases of optic neuritis (ON) occur in women and in patients between the ages of 15 and 45 years, a key demographic of individuals who seek health information using the internet. As clinical providers strive to ensure patients have accessible information to understand their condition, assessing the standard of online resources is essential. This study assessed the quality, content, accountability, and readability of online information on optic neuritis. This cross-sectional study analyzed 11 freely available medical sites with information on optic neuritis and used PubMed as a gold standard for comparison. Twelve questions were composed to cover the information most relevant to patients, and each website was independently examined by four neuro-ophthalmologists. Readability was analyzed using an online readability tool. Journal of the American Medical Association (JAMA) benchmarks, four criteria designed to further assess the quality of health information, were used to evaluate the accountability of each website. All information assessed was freely available online. On average, websites scored 27.98 (SD ± 9.93, 95% CI 24.96-31.00) of 48 potential points (58.3%) on the twelve questions. There were significant differences in the comprehensiveness and accuracy of content across websites (p < .001). The mean reading grade level of websites was 11.90 (SD ± 2.52, 95% CI 8.83-15.25). No website achieved all four JAMA benchmarks. Interobserver reliability was robust between three of the four neuro-ophthalmologist (NO) reviewers (ρ = 0.77 between NO3 and NO2, ρ = 0.91 between NO3 and NO1, ρ = 0.74 between NO2 and NO1; all p < .05). The quality of freely available online information detailing optic neuritis varies by source, with significant room for improvement. The material presented is difficult to interpret and exceeds the recommended reading level for health information. Most websites reviewed did not provide comprehensive information regarding non-therapeutic aspects of the disease. Ophthalmology organizations should be encouraged to create content that is more accessible to the general public.

  • Article type: Journal Article
    (1) Background: The wording of informed consent forms could hinder their comprehension and impede patients' autonomous choice. The objective of this study was to analyze the readability and comprehension of anesthesia informed consent forms in a Spanish county hospital. (2) Methods: A descriptive, cross-sectional study was carried out on patients who were going to undergo anesthetic techniques. The readability of the forms was analyzed using the INFLESZ tool and their subjective comprehension using an ad hoc questionnaire. (3) Results: The analyzed forms presented a "somewhat difficult" readability. A total of 44.2% of the patients decided not to read the form, mainly because they had previously undergone surgery with the same anesthetic technique. The language used in the forms was considered inadequate by 49.5% of the patients, and 53.3% did not comprehend it in its entirety. A statistically significant negative correlation of age and INFLESZ readability score with the overall questionnaire score was found. Statistically significant associations were observed between age and educational level and the different criteria of the questionnaire. (4) Conclusions: The anesthesia informed consent forms presented low readability with limited comprehension. It would be necessary to improve their wording to favor comprehension and guarantee patients' freedom of choice.

  • Article type: Journal Article
    Background: The National Institutes of Health (NIH) and American Medical Association (AMA) recommend that online health information be written at a maximum sixth-grade reading level. The aim was to evaluate online resources regarding shoulder arthroscopy using measures of readability, understandability, and actionability, based on syntax reading grade level and the Patient Education Materials Assessment Tool (PEMAT-P).
    Methods: An online Google™ search for "shoulder arthroscopy" was performed. From the top 50 results, websites directed at educating patients were included. News and scientific articles, audiovisual materials, industry websites, and unrelated materials were excluded. Readability was calculated using objective algorithms: Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) grade, Coleman-Liau Index (CLI), and Gunning Fog Index (GFI). The PEMAT-P was used to assess understandability and actionability, with a 70% score threshold. Scores were compared across academic institutions, private practices, and commercial health publishers. The correlation between search rank and readability, understandability, and actionability was calculated.
    Results: Two independent searches yielded 53 websites, with 44 (83.02%) meeting inclusion criteria. No mean readability score fell below a 10th-grade reading level. Only one website scored at or below a sixth-grade reading level. Mean understandability and actionability scores were 63.02% ± 12.09 and 29.77% ± 20.63, respectively; neither met the PEMAT threshold. Twelve (27.27%) websites met the understandability threshold, while none met the actionability threshold. Institution categories scored similarly in understandability (61.71%, 62.68%, and 63.67% for academic, private practice, and commercial health publishers, respectively; p = 0.9536). No readability or PEMAT score correlated with search rank.
    Conclusions: Online shoulder arthroscopy patient education materials score poorly in readability, understandability, and actionability. One website scored at the NIH- and AMA-recommended reading level, and 27.27% of websites scored above the 70% PEMAT threshold for understandability. None met the actionability threshold. Future efforts should improve online resources to optimize patient education and facilitate informed decision-making. Level of Evidence: IV.
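The SMOG grade and Gunning Fog Index used in this study are also simple closed-form formulas over sentence and polysyllable counts. A hedged sketch (the vowel-group syllable counter is an approximation, and SMOG is formally defined over a 30-sentence sample, which the formula's scaling factor accounts for):

```python
import math
import re

def syllables(word: str) -> int:
    # Rough vowel-group count; real tools use pronunciation dictionaries.
    n = len(re.findall(r"[aeiouy]+", word.lower()))
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def smog_grade(text: str) -> float:
    # SMOG: grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    poly = sum(1 for w in words if syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * (30 / len(sentences))) + 3.1291

def gunning_fog(text: str) -> float:
    # Fog: 0.4 * (words/sentences + 100 * complex_words/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = sum(1 for w in words if syllables(w) >= 3)
    return 0.4 * (len(words) / len(sentences) + 100 * complex_words / len(words))
```

Both formulas penalize words of three or more syllables, which is why medical terminology pushes patient materials well above a sixth-grade level.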

  • Article type: Journal Article
    Background: Patients often access online resources to educate themselves prior to undergoing elective surgery such as carpal tunnel release (CTR). The purpose of this study was to evaluate available online resources regarding CTR on objective measures of readability (syntax reading grade level), understandability (ability to convey key messages in a comprehensible manner), and actionability (providing actions the reader may take).
    Methods: The study conducted two independent Google searches for "carpal tunnel surgery" and, among the top 50 results, analyzed articles aimed at educating patients about CTR. Readability was assessed using six different indices: the Flesch-Kincaid Grade Level Index, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, and Automated Readability Index. The Patient Education Materials Assessment Tool evaluated understandability and actionability on a 0-100% scale. Spearman's correlation assessed relationships between these metrics and Google search ranks, with p < 0.05 indicating statistical significance.
    Results: Of the 39 websites meeting the inclusion criteria, the mean readability grade level exceeded 9, with the lowest being 9.4 ± 1.5 (SMOG Index). Readability did not correlate with Google search ranking (lowest p = 0.25). Mean understandability and actionability were 59% ± 15 and 26% ± 24, respectively. Only 28% of the articles used visual aids, and few provided concise summaries or clear, actionable steps. Notably, lower reading grade levels were linked to higher actionability scores (p ≤ 0.02 across several indices), but no readability metric correlated significantly with understandability. Google search rankings showed no significant association with either understandability or actionability scores.
    Conclusions: Online educational materials for CTR score poorly in readability, understandability, and actionability. Quality metrics do not appear to affect Google search rankings. The poor quality-metric scores found in our study highlight a need for hand specialists to improve online patient resources, especially in an era emphasizing shared decision-making in healthcare. Level of Evidence: IV.

  • Article type: Journal Article
    Objective: Despite varying opinions, little research has examined how best to write pediatric neuropsychology reports. Method: This study gathered input from 230 parents on how text difficulty (reading level) and visual emphasis (bullets, underlining, italics) affect report readability and utility. We focused on the most-read report section: summary/impressions. Each parent rated the readability and usefulness of a generic summary/impressions section written in four different styles. The four styles crossed text difficulty (high school vs. collegiate) with use of visual emphasis (absent vs. present). Results: Parents found versions with easier text to be more clearly written, easier to follow, and easier to search for information (p < .001). Parents rated versions with harder text as overly detailed, complex, hard to understand, and hard to read (p < .001). Visual emphasis made it easier to find key information and made the text easier to follow and understand, but primarily for versions written in difficult text (interaction p ≤ .026). After rating all four styles, parents picked their preference. They most often picked versions written in easier text with visual emphasis (p < .001). Conclusions: The findings support writing styles that use easier text and visual emphasis.

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    Background: Ensuring that educational materials geared toward transgender and gender-diverse patients are comprehensible can mitigate barriers to accessing gender-affirming care and understanding postoperative care. This study evaluates the readability of online patient resources related to gender-affirming vaginoplasty.
    Methods: Online searches for vaginoplasty were conducted in January 2023 using two search engines. The readability scores of the top ten websites and their associated hyperlinked webpages were derived using ten validated readability tests.
    Results: A total of 40 pages were assessed from the vaginoplasty searches. The average reading grade level for all the webpages with relevant educational materials was 13.3 (i.e., college level), exceeding the American Medical Association's recommended sixth-grade reading level.
    Conclusions: Complex patient resources may impede patients' understanding of gender-affirming vaginoplasty. Online patient education resources should be created that are more accessible to patients with diverse reading comprehension capabilities.

  • Article type: Journal Article
    OBJECTIVE: This study examined the potential of ChatGPT as an accurate and readable source of information for parents seeking guidance on adenoidectomy, tonsillectomy, and ventilation tube insertion surgeries (ATVtis).
    METHODS: ChatGPT was tasked with identifying the top 15 questions most frequently asked by parents on internet search engines for each of the three specific surgical procedures. We removed repeated questions from the initial set of 45. Subsequently, we asked ChatGPT to generate answers to the remaining 33 questions. Seven highly experienced otolaryngologists individually assessed the accuracy of the responses using a four-level grading scale, from completely incorrect to comprehensive. The readability of responses was determined using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. The questions were categorized into four groups: Diagnosis and Preparation Process, Surgical Information, Risks and Complications, and Postoperative Process. Responses were then compared based on accuracy grade, FRE, and FKGL scores.
    RESULTS: Seven evaluators each assessed 33 AI-generated responses, providing a total of 231 evaluations. Among the evaluated responses, 167 (72.3%) were classified as 'comprehensive'. Sixty-two responses (26.8%) were categorized as 'correct but inadequate', and two responses (0.9%) were assessed as 'some correct, some incorrect'. No response was judged 'completely incorrect' by any assessor. The average FRE and FKGL scores were 57.15 (± 10.73) and 9.95 (± 1.91), respectively. Of the responses from ChatGPT, 3 (9.1%) were at or below the sixth-grade reading level recommended by the American Medical Association (AMA). No significant differences were found between the groups regarding readability and accuracy scores (p > 0.05).
    CONCLUSIONS: ChatGPT can provide accurate answers to questions on various topics related to ATVtis. However, ChatGPT's answers may be too complex for some readers, as they are generally written at a high school level, above the sixth-grade reading level recommended for patient information by the AMA. In our study, more than three-quarters of the AI-generated responses were at or above a 10th-grade reading level, raising concerns about the readability of ChatGPT's text.

  • Article type: Journal Article
    The American Society for Surgery of the Hand and British Society for Surgery of the Hand produce patient-focused information above the sixth-grade readability level recommended by the American Medical Association. To promote health equity, patient-focused content should be aimed at an appropriate level of health literacy. Artificial intelligence-driven large language models may be able to assist hand surgery societies in improving the readability of the information provided to patients. Readability was calculated for all articles written in English on the American Society for Surgery of the Hand and British Society for Surgery of the Hand websites, in terms of seven of the most common readability formulas. Chat Generative Pre-Trained Transformer version 4 (ChatGPT-4) was then asked to rewrite each article at a sixth-grade readability level. The readability of each response was calculated and compared with that of the unedited article. ChatGPT-4 improved readability across all chosen readability formulas and succeeded in achieving a mean sixth-grade readability level in terms of the Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook calculations. It also increased the mean Flesch Reading Ease score, with higher scores representing more readable material. This study demonstrated that ChatGPT-4 can be used to improve the readability of patient-focused material in hand surgery. However, ChatGPT-4 is optimized primarily for sounding natural rather than for seeking truth; hence, each response must be evaluated by the surgeon to ensure that information accuracy is not sacrificed for the sake of readability by this powerful tool.