Academic writing

  • Article type: Journal Article
    This study explores the extent to which Grammarly can be a reliable assessment tool for academic English writing. Ten articles published in high-status Q1 scholarly journals and written by specialist native English speakers were used to evaluate the accuracy of the issues Grammarly flagged. The results showed that Grammarly tends to over-flag issues, producing many false positives; moreover, it does not take optional usage in English into account. The study concluded that although Grammarly can identify many ambiguous instances of language use that writers would do well to review and consider for revision, it does not seem to be a reliable tool for assessing academic written English.
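    Over-flagging of this kind is naturally summarized by precision: the share of flagged items that are genuine errors. The following is a minimal sketch with invented counts (the study's actual tallies are not reproduced here):

```python
# Minimal sketch: precision of a grammar checker's flagged issues.
# The counts are invented for illustration, not taken from the study.
true_positives = 34   # flags experts confirmed as genuine errors
false_positives = 66  # flags on acceptable or optional usage

precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.2f}")  # 0.34: most flags are false alarms
```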

  • Article type: Journal Article
    This study investigates the persuasive strategies used in the writings of Iranian university students in the field of teaching English as a foreign language (TEFL). The study utilized the seven principles of persuasion presented by Cialdini (The psychology of persuasion, Quill William Morrow, New York 1984; Pre-suasion: A revolutionary way to influence and persuade, Simon & Schuster, New York 2016): 'reciprocity', 'commitment and consistency', 'social proof', 'liking', 'authority', 'scarcity', and 'unity'. The results indicate that strategies such as 'liking', 'unity', and 'authority' were used more frequently than the others, while 'scarcity' was the strategy participants used least. Significant gender differences were also observed in the data. These findings have important pedagogical implications and suggest the need to incorporate persuasive strategies into instructional materials and teaching practices to enhance the persuasive writing skills of university students. Furthermore, the gender differences highlight the importance of considering individual differences when teaching persuasive writing. Finally, the study discusses the pedagogical implications of these findings in the context of learning and teaching.

  • Article type: Journal Article
    BACKGROUND: Academic paper writing holds significant importance in the education of medical students, and poses a clear challenge for those whose first language is not English. This study aims to investigate the effectiveness of employing large language models, particularly ChatGPT, in improving the English academic writing skills of these students.
    METHODS: A cohort of 25 third-year medical students from China was recruited. The study consisted of two stages. First, the students were asked to write a mini paper. Second, they were asked to revise the mini paper using ChatGPT within two weeks. The evaluation of the mini papers focused on three key dimensions: structure, logic, and language. The evaluation combined manual scoring with AI scoring using the ChatGPT-3.5 and ChatGPT-4 models. Additionally, we employed a questionnaire to gather feedback on the students' experience of using ChatGPT.
    RESULTS: After ChatGPT was introduced for writing assistance, manual scores rose notably, by 4.23 points. Similarly, AI scores based on the ChatGPT-3.5 model increased by 4.82 points, while the ChatGPT-4 model showed an increase of 3.84 points. These results highlight the potential of large language models to support academic writing. Statistical analysis revealed no significant difference between manual scoring and ChatGPT-4 scoring (a paired comparison of this kind is sketched after this abstract), indicating the potential of ChatGPT-4 to assist teachers in the grading process. Feedback from the questionnaire indicated a generally positive response: 92% of students acknowledged an improvement in the quality of their writing, 84% noted advancements in their language skills, and 76% recognized ChatGPT's contribution to supporting academic research.
    CONCLUSIONS: The study highlighted the efficacy of large language models like ChatGPT in augmenting the English academic writing proficiency of non-native speakers in medical education. Furthermore, it illustrated the potential of these models to contribute to the educational evaluation process, particularly in environments where English is not the primary language.
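    The comparison between manual and ChatGPT-4 scores lends itself to a paired statistical test. The abstract does not name the test used, so the following is a minimal sketch under that assumption, with made-up scores:

```python
# Minimal sketch of a paired comparison between two raters scoring the
# same mini papers (e.g. manual scores vs. ChatGPT-4 scores).
# The scores are invented for illustration only.
from scipy import stats

manual_scores = [72.5, 68.0, 75.0, 80.5, 71.0, 77.5]
gpt4_scores = [73.0, 66.5, 76.0, 79.5, 72.0, 78.0]

# Paired t-test: do the two raters differ systematically?
t_stat, p_value = stats.ttest_rel(manual_scores, gpt4_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value (e.g. > 0.05) would be consistent with the finding of
# no significant difference between manual and ChatGPT-4 scoring.
```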

  • Article type: Journal Article
    In recent decades, members of the general public have become increasingly reliant on the findings of scientific studies for decision-making. However, scientific writing usually features heavy use of technical language, which may pose challenges for people outside the scientific community. To alleviate this issue, plain language summaries were introduced to provide a brief summary of scientific papers in clear and accessible language. Despite increasing research attention to plain language summaries, little is known about whether these summaries are actually readable for their intended audiences. Based on a large corpus sampled from six biomedical and life sciences journals, the present study examined the readability and jargon use of plain language summaries and scientific abstracts at a technical level. It was found that (1) plain language summaries were more readable than scientific abstracts, (2) the reading grade levels of plain language summaries were moderately correlated with those of scientific abstracts, (3) researchers used less jargon in plain language summaries than in scientific abstracts, and (4) the readability of, and the jargon use in, both plain language summaries and scientific abstracts exceeded the recommended thresholds for the general public. The findings are discussed along with possible explanations, and implications for academic writing and scientific communication are offered.
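    Reading grade levels of the kind compared in this study are commonly computed with formulas such as Flesch-Kincaid (the study's exact metric is not specified here). A minimal sketch, using a rough vowel-group heuristic for syllable counting:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels as syllables."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

abstract = ("The pathophysiological mechanisms underlying chronic "
            "inflammation remain incompletely characterized.")
summary = "We still do not fully know what causes long-term swelling."
print(f"abstract grade: {flesch_kincaid_grade(abstract):.1f}")
print(f"summary grade:  {flesch_kincaid_grade(summary):.1f}")
```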

  • Article type: Journal Article
    Although preconceived notions have long discouraged authors from asserting their presence in research articles (RAs), recent studies have substantiated that the use of self-mention markers offers a means to establish authorial identity and recognition in a given discipline. Few studies, however, have explored specific sections of research articles to uncover how self-mentions function within each section's conventions. Exploring the use of self-mention markers, the present study compared the method sections written by native English writers and L1 Persian writers in the field of psychology. The corpus contained 120 RAs, with each sub-corpus including 60 RAs. The RAs were then examined structurally and functionally. The data were analyzed both quantitatively, using frequency counts and chi-square analyses, and qualitatively, through content analysis. The findings indicated a significant difference between English and Persian authors in the frequency of self-mentions and in the dimension of rhetorical functions; however, the differences in the dimensions of grammatical forms and of hedging and boosting were found to be insignificant. Native English authors were inclined to make more use of self-mentions in their research articles. The findings can assist novice EAP and ESP researchers in taking cognizance of the conventions of authorial identity in each genre.
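    The frequency comparisons described above rest on chi-square analysis. A minimal sketch with invented counts (the paper's raw frequencies are not given here), testing whether self-mention rates differ between the two sub-corpora:

```python
# Minimal sketch of a chi-square test comparing self-mention frequencies
# in two sub-corpora. All counts are invented for illustration.
from scipy.stats import chi2_contingency

#                 [self-mentions, other tokens]
english_corpus = [420, 179_580]
persian_corpus = [260, 179_740]

chi2, p, dof, expected = chi2_contingency([english_corpus, persian_corpus])
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
# A small p-value indicates the two groups of writers differ in how
# often they use self-mentions.
```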

  • Article type: Journal Article
    The world of publication can seem intimidating and closed to the newcomer. How, then, does one even begin to get a foot in the door? In this paper, the authors draw from the literature and from their recent lived experience as editorial interns to consider this challenge under the theme of access and how it overlaps with the various components of academic publication. The three main components of the publication 'machine' are discussed in this article: authoring, reviewing, and editing. These are preceded by the first, and arguably foundational, interaction with academic journal publishing: reading. Without reading articles across different journals, and even in different disciplines, understanding the breadth of scholarship and its purpose is impossible. The subsequent components of authoring, reviewing, and editing, which are all enhanced by ongoing familiarity with current literature through further reading, are considered in detail in the remainder of this article, with practical advice on how to gain access and experience in each of these areas: for example, writing non-research article manuscripts, engaging in collaborative peer review, and applying (with perseverance) for editorial opportunities when they present themselves. Medical education publication can seem daunting and closed to entry-level academics. This article is written to dispel this view and challenges the notion that the world of publication is reserved for experts only. On the contrary, newcomers to the field are essential if academic publications are to retain relevance, dynamism, and innovation, particularly in the face of the changing landscape of medical education.

  • Article type: Journal Article
    This study scrutinizes free AI tools tailored for supporting literature review and analysis in academic research, emphasizing their response to direct inquiries. Through a targeted keyword search, we cataloged relevant AI tools and evaluated their output variation and source validity. Our results reveal a spectrum of response qualities, with some tools integrating non-academic sources and others depending on outdated information. Notably, most tools showed a lack of transparency in source selection. Our study highlights two key limitations: the exclusion of commercial AI tools and the focus solely on tools that accept direct research queries. This raises questions about the potential capabilities of paid tools and the efficacy of combining various AI tools for enhanced research outcomes. Future research should explore the integration of diverse AI tools, assess the impact of commercial tools, and investigate the algorithms behind response variability. This study contributes to a better understanding of AI's role in academic research, emphasizing the importance of careful selection and critical evaluation of these tools in academic endeavors.

  • Article type: Journal Article
    OBJECTIVE: To evaluate the ability of ChatGPT-4 to generate a biomedical review article on fertility preservation.
    METHODS: ChatGPT-4 was prompted to create an outline for a review on fertility preservation in men and prepubertal boys. The outline provided by ChatGPT-4 was then used to prompt ChatGPT-4 to write the different parts of the review and provide five references for each section. The parts of the article and the references provided were combined to create a single scientific review, which was evaluated by the authors, who are experts in fertility preservation. The experts assessed the article and the references for accuracy and checked for plagiarism using online tools. In addition, both experts independently scored the relevance, depth, and currentness of ChatGPT-4's article using a scoring matrix ranging from 0 to 5, where higher scores indicate higher quality.
    RESULTS: ChatGPT-4 successfully generated a relevant scientific article with references. Among 27 statements needing citations, four were inaccurate. Of the 25 references, 36% were accurate, 48% had correct titles but other errors, and 16% were completely fabricated (a sketch of automated DOI checks follows this abstract). Plagiarism was minimal (mean = 3%). Experts rated the article's relevance highly (5/5) but gave lower scores for depth (2-3/5) and currentness (3/5).
    CONCLUSIONS: ChatGPT-4 can produce a scientific review on fertility preservation with minimal plagiarism. While largely precise in content, it showed factual and contextual inaccuracies and inconsistent reference reliability. These issues limit ChatGPT-4 as a sole tool for scientific writing but suggest its potential as an aid in the writing process.
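    Reference checks like those the experts performed can be partly automated. The abstract does not name the online tools used, so as an illustrative sketch, one could query the public Crossref REST API to test whether a cited DOI resolves to a registered record (the DOI below is a placeholder):

```python
# Illustrative sketch: check whether a DOI is registered in Crossref and
# retrieve its recorded title for comparison against the citation.
import requests

def lookup_doi(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if not found."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None  # DOI not registered: candidate fabricated reference
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

cited_doi = "10.1000/example.doi"  # placeholder, not a real DOI
title = lookup_doi(cited_doi)
print(title or "DOI not found -- possibly a fabricated reference")
```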

  • Article type: Journal Article
    ChatGPT, which can automatically generate written responses to queries using internet sources, went viral soon after its release at the end of 2022. The performance of ChatGPT on medical exams shows results near the passing threshold, making it comparable to third-year medical students. It can also write academic abstracts or reviews at an acceptable level. However, it is not clear how ChatGPT deals with harmful content, misinformation, or plagiarism; therefore, authors using ChatGPT professionally for academic writing should be cautious. ChatGPT also has the potential to facilitate the interaction between healthcare providers and patients in various ways. However, sophisticated tasks such as understanding human anatomy are still a limitation of ChatGPT. ChatGPT can simplify radiological reports, but the possibility of incorrect statements and missing medical information remains. Although ChatGPT has the potential to change medical practice, education, and research, further improvements to this application are needed before it can be used routinely in medicine.

  • Article type: Journal Article
    BACKGROUND: Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022.
    OBJECTIVE: The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and humanities.
    METHODS: Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and Digital Object Identifiers (DOIs). Results were compared between the two disciplines.
    RESULTS: Ten topics were included: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, with 55 in the natural sciences and 47 in the humanities. Among these, 40 citations (72.7%) in the natural sciences and 36 citations (76.6%) in the humanities were confirmed to exist (P=.42). There were significant disparities in DOI presence between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), along with significant differences in DOI accuracy between the two disciplines (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%). The Levenshtein distance was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy (see the sketch after this abstract).
    CONCLUSIONS: ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to performance variations. Researchers should consider the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy. The use of domain-specific models may enhance accuracy.
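    Levenshtein distance, used above to quantify how far generated DOIs deviate from the correct ones, is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal sketch with hypothetical DOIs:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical example: a generated DOI one character off from the real one.
generated = "10.1037/0003-066X.59.1.29"
actual = "10.1037/0003-066X.59.1.20"
print(levenshtein(generated, actual))  # prints 1
```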
