Plagiarism

  • Article type: Journal Article
    Introduction: Plagiarism is appropriating another person's ideas, words, results, or processes without giving appropriate credit, usually while claiming them as one's own. Plagiarism is thus a dishonest act of fraud or cheating.
    Objectives: The objective of this study was to assess the perception of plagiarism among medical postgraduate (PG) students.
    Materials and Methods: An educational observational study on the perception of plagiarism was conducted among second-year PG students, using pre-test and post-test questionnaires around an orientation session on plagiarism and data analysis held before the start of dissertation analysis. Questions covered awareness of and attitudes toward plagiarism.
    Results: A survey involving 91 PG students assessed their understanding of plagiarism. Remarkably, the majority (97.7%) demonstrated awareness of plagiarism, yet only 18.6% had authored a published article. About 30% of the students had resorted to plagiarism at some point during their academic pursuits. Approximately 70.9% of the PG students were acquainted with the University's plagiarism policy. The survey highlighted a notable enhancement in plagiarism awareness among PG students, with their attitudes toward plagiarism evolving after participating in the session.
    Conclusion: Plagiarism can be avoided by implementing rigorous guidelines, ensuring strict policy adherence, and providing comprehensive training before commencing work. Training, retraining, and strict institute policies will help increase awareness of plagiarism and reduce its prevalence in scientific writing.

  • Article type: Editorial
    No abstract available.

  • Article type: Journal Article
    BACKGROUND: The introduction of ChatGPT by OpenAI has garnered significant attention. Among its capabilities, paraphrasing stands out.
    OBJECTIVE: This study aims to investigate the level of plagiarism in the paraphrased text produced by this chatbot.
    METHODS: Three texts of varying lengths were presented to ChatGPT. ChatGPT was then instructed to paraphrase the provided texts using five different prompts. In the subsequent stage of the study, the texts were divided into separate paragraphs, and ChatGPT was requested to paraphrase each paragraph individually. Lastly, in the third stage, ChatGPT was asked to paraphrase the texts it had previously generated.
    RESULTS: The average plagiarism rate in the texts generated by ChatGPT was 45% (SD 10%). ChatGPT exhibited a substantial reduction in plagiarism for the provided texts (mean difference -0.51, 95% CI -0.54 to -0.48; P<.001). Furthermore, when comparing the second attempt with the initial attempt, a significant decrease in the plagiarism rate was observed (mean difference -0.06, 95% CI -0.08 to -0.03; P<.001). The number of paragraphs in the texts demonstrated a noteworthy association with the percentage of plagiarism, with texts consisting of a single paragraph exhibiting the lowest plagiarism rate (P<.001).
    CONCLUSIONS: Although ChatGPT demonstrates a notable reduction of plagiarism within texts, the existing levels of plagiarism remain relatively high. This underscores a crucial caution for researchers when incorporating this chatbot into their work.
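A paired mean-difference with a 95% CI, as reported in these results, can be sketched minimally as follows; the plagiarism rates below are made up for illustration, and the study's exact estimator may differ (a normal approximation is assumed here):

```python
import math
import statistics

def paired_mean_diff_ci(before, after, z=1.96):
    """Mean of paired differences (after - before) with a normal-approximation 95% CI."""
    diffs = [a - b for a, b in zip(after, before)]
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean, (mean - z * se, mean + z * se)

# Hypothetical plagiarism rates for five texts, before and after paraphrasing.
before = [0.95, 0.90, 0.98, 0.92, 0.96]
after = [0.45, 0.40, 0.50, 0.42, 0.48]
mean, (lo, hi) = paired_mean_diff_ci(before, after)
print(round(mean, 2), round(lo, 2), round(hi, 2))  # negative difference = less plagiarism
```

A confidence interval whose upper bound stays below zero, as in the study's reported -0.51 (95% CI -0.54 to -0.48), indicates the reduction is statistically reliable rather than noise.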

  • Article type: Journal Article
    Extensive use of AI-generated texts has surged recently with the advent of large language models. Although the use of AI text generators, such as ChatGPT, is beneficial, it also threatens academic integrity, as students may resort to it. In this work, we propose a technique leveraging the intrinsic stylometric features of documents to detect ChatGPT-based plagiarism. The stylometric features were normalized and fed to classical classifiers, such as k-Nearest Neighbors, Decision Tree, and Naïve Bayes, as well as ensemble classifiers, such as XGBoost and Stacking. The classifiers were thoroughly examined using cross-validation, hyperparameter tuning, and multiple training iterations. The results show the efficacy of both classical and ensemble learning classifiers in distinguishing between human and ChatGPT writing styles, with a noteworthy performance by XGBoost, which achieved 100% accuracy, recall, and precision. Moreover, the proposed XGBoost classifier outperformed the state-of-the-art result on the same dataset and classifier, highlighting the superiority of the proposed feature-style extraction method over TF-IDF techniques. The ensemble learning classifiers were also applied to a generated dataset with mixed texts, in which paragraphs were written by ChatGPT and humans. The results show that 98% of the documents were classified correctly as either mixed or human. The final contribution is the authorship attribution of the paragraphs of a single document, where the accuracy reached 92.3%.
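The pipeline this abstract describes (stylometric feature extraction, normalization, then a classical classifier) might be sketched roughly as below. The four features and the toy corpus are illustrative assumptions, not the paper's actual feature set, and a 1-nearest-neighbor classifier stands in for the ensemble models it evaluates:

```python
import re
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def stylometric_features(text):
    """Toy stylometric profile: sentence length, word length, lexical diversity, comma rate."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    n_words = max(len(words), 1)
    return [
        len(words) / max(len(sentences), 1),  # mean sentence length
        sum(map(len, words)) / n_words,       # mean word length
        len(set(words)) / n_words,            # type-token ratio
        text.count(",") / n_words,            # comma rate
    ]

# Hypothetical toy corpus: 0 = human-written, 1 = AI-generated.
texts = [
    "I guess it works, kind of. Ran it twice, same crash. Weird.",
    "So yeah, the demo broke again. Classic. We patched it fast.",
    "The proposed method demonstrates considerable efficacy across domains.",
    "Furthermore, the results indicate a substantial improvement in accuracy.",
]
labels = [0, 0, 1, 1]

X = StandardScaler().fit_transform([stylometric_features(t) for t in texts])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(list(clf.predict(X)))  # with k=1, training points map back to their own labels
```

In a real evaluation the classifier would be scored on held-out documents via cross-validation, as the paper does; normalization matters because the raw features live on very different scales.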

  • Article type: Journal Article
    The application of artificial intelligence (AI) technologies in scientific research has significantly enhanced efficiency and accuracy but also introduced new forms of academic misconduct, such as data fabrication and text plagiarism using AI algorithms. These practices jeopardize research integrity and can mislead scientific directions. This study addresses these challenges, underscoring the need for the academic community to strengthen ethical norms, enhance researcher qualifications, and establish rigorous review mechanisms. To ensure responsible and transparent research processes, we recommend the following specific key actions: (1) development and enforcement of comprehensive AI research integrity guidelines that include clear protocols for AI use in data analysis and publication, ensuring transparency and accountability in AI-assisted research; (2) implementation of mandatory AI ethics and integrity training for researchers, aimed at fostering an in-depth understanding of potential AI misuses and promoting ethical research practices; and (3) establishment of international collaboration frameworks to facilitate the exchange of best practices and the development of unified ethical standards for AI in research. Protecting research integrity is paramount for maintaining public trust in science, making these recommendations urgent for the scientific community's consideration and action.

  • Article type: Journal Article
    Literature is an expression of its author, and its commodification has historically assigned it value primarily in terms of authorship credit. Reproducing published content without attributing the requisite source, termed plagiarism, is arguably an ethical discredit to this premise. However, weighing its proportion simply by digitally assigned semantic similarity may not be completely justifiable in the present-day digital atmosphere. It should be noted that while technology can facilitate plagiarism detection, digitization, by providing greater access to published content, is also a facilitator of plagiarism. While the scientific community is often severe in its approach toward the act of plagiarism, there is still a lack of clarity around the corresponding code of conduct, as there are several grey areas related to such misconduct on which the law remains silent. By revisiting the historical evolution of authorship credit and copyright law, this piece presents an analytical vista on plagiarism in a different light. By identifying the gaps in the present-day handling of these age-old concepts, one may find an unmet need to revisit the legal aspects of handling cases of plagiarism, taking the digital environment into consideration.

  • Article type: Journal Article
    BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little has been studied about the difference in their capability to generate abstracts. The use of AI to write scientific abstracts in the field of spine surgery is the center of much debate and controversy.
    OBJECTIVE: The objective of this study is to assess the reproducibility of the structured abstracts generated by ChatGPT and Bard compared to human-written abstracts in the field of spine surgery.
    METHODS: In total, 60 abstracts dealing with spine sections were randomly selected from 7 reputable journals and used as ChatGPT and Bard input statements to generate abstracts based on supplied paper titles. A total of 174 abstracts, divided into human-written abstracts, ChatGPT-generated abstracts, and Bard-generated abstracts, were evaluated for compliance with the structured format of journal guidelines and consistency of content. The likelihood of plagiarism and AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or human authors.
    RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) compared with those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) compared with Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts and 55.9% (62/128) of AI-generated abstracts were recognized as human-written and AI-generated by human reviewers, respectively.
    CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.

  • Article type: Journal Article
    OBJECTIVE: The paper studies attitudes toward critical thinking, academic integrity, and the use of Artificial Intelligence among Ukrainian medical PhD students.
    METHODS: In 2023, 56 medical PhD students from the Bogomolets National Medical University, Kyiv, Ukraine, completed the survey. Participation was voluntary, with oral consent. The survey questions covered various aspects of critical thinking, analytical skills, and attitudes toward plagiarism.
    RESULTS: A significant majority of the medical PhD students (75%) place high importance on critical thinking. While a majority (89.29%) apply analysis and critical thinking skills in their English studies, a notable percentage (7.14%) are uncertain. Although most are aware of the unacceptability of cheating and plagiarism (75%), a small proportion admit to having plagiarized (12.5%). Only 30.4% of the respondents reported using GPT Chat for study. Responses to witnessing peers plagiarize or use Artificial Intelligence showed varied attitudes, with many expressing unwillingness to report such incidents (30.36%).
    CONCLUSIONS: The survey highlights the recognized importance of critical thinking in academic study among medical PhD students, while also pointing to areas where attitudes and practices regarding these skills could be improved. The study shows a vast area for improvement regarding academic integrity, as almost one-third of respondents need more clearly defined standards. This raises questions for present medical postgraduate education, requiring a change of educational paradigm, clear rules of academic conduct, and a system of control.

  • Article type: Journal Article
    No abstract available.

  • Article type: Journal Article
    BACKGROUND: ChatGPT, a publicly available artificial intelligence large language model, has allowed for sophisticated artificial intelligence technology on demand. Indeed, use of ChatGPT has already begun to make its way into medical research. However, the medical community has yet to understand the capabilities and ethical considerations of artificial intelligence within this context, and unknowns exist regarding ChatGPT's writing abilities, accuracy, and implications for authorship.
    OBJECTIVE: We hypothesize that human reviewers and artificial intelligence detection software differ in their ability to correctly identify original published abstracts and artificial intelligence-written abstracts in the subjects of Gynecology and Urogynecology. We also suspect that concrete differences in writing errors, readability, and perceived writing quality exist between original and artificial intelligence-generated text.
    METHODS: Twenty-five articles published in high-impact medical journals and a collection of Gynecology and Urogynecology journals were selected. ChatGPT was prompted to write 25 corresponding artificial intelligence-generated abstracts, providing the abstract title, journal-dictated abstract requirements, and select original results. The original and artificial intelligence-generated abstracts were reviewed by blinded Gynecology and Urogynecology faculty and fellows to identify the writing as original or artificial intelligence-generated. All abstracts were analyzed by publicly available artificial intelligence detection software GPTZero, Originality, and Copyleaks, and were assessed for writing errors and quality by artificial intelligence writing assistant Grammarly.
    RESULTS: A total of 157 reviews of 25 original and 25 artificial intelligence-generated abstracts were conducted by 26 faculty and 4 fellows; 57% of original abstracts and 42.3% of artificial intelligence-generated abstracts were correctly identified, yielding an average accuracy of 49.7% across all abstracts. All 3 artificial intelligence detectors rated the original abstracts as less likely to be artificial intelligence-written than the ChatGPT-generated abstracts (GPTZero, 5.8% vs 73.3%; P<.001; Originality, 10.9% vs 98.1%; P<.001; Copyleaks, 18.6% vs 58.2%; P<.001). The performance of the 3 artificial intelligence detection software differed when analyzing all abstracts (P=.03), original abstracts (P<.001), and artificial intelligence-generated abstracts (P<.001). Grammarly text analysis identified more writing issues and correctness errors in original than in artificial intelligence abstracts, including lower Grammarly score reflective of poorer writing quality (82.3 vs 88.1; P=.006), more total writing issues (19.2 vs 12.8; P<.001), critical issues (5.4 vs 1.3; P<.001), confusing words (0.8 vs 0.1; P=.006), misspelled words (1.7 vs 0.6; P=.02), incorrect determiner use (1.2 vs 0.2; P=.002), and comma misuse (0.3 vs 0.0; P=.005).
    CONCLUSIONS: Human reviewers are unable to detect the subtle differences between human and ChatGPT-generated scientific writing because of artificial intelligence's ability to generate tremendously realistic text. Artificial intelligence detection software improves the identification of artificial intelligence-generated writing, but still lacks complete accuracy and requires programmatic improvements to achieve optimal detection. Given that reviewers and editors may be unable to reliably detect artificial intelligence-generated texts, clear guidelines for reporting artificial intelligence use by authors and implementing artificial intelligence detection software in the review process will need to be established as artificial intelligence chatbots gain more widespread use.
