Potential impact of large language models on academic writing

Research output: Contribution to journal › Article › peer-review

Abstract

LLMs, such as GPT-4 and Bard, are transformer architectures based on deep learning techniques. Briefly, they generate text by predicting the next word in a sentence given all the previous words. The choice of which word appears in which place within a sentence therefore depends on probabilities governed by the massive amount of text data used to train them. Additionally, an element of randomness is intertwined within the LLM algorithm to introduce a measure of inconsistency and simulate a perception of creativity. LLMs can capture long-range dependencies between words, meaning that they can 'understand' the context around a word within a sentence, leading to the perception of coherence and contextual relevance in the generated output. Without keeping in mind how LLMs work, it is easy to anthropomorphise them when interacting with them; this is one of several factors leading to the sensational uptake and hype in recent months.
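The sampling process the abstract describes, choosing the next word from a probability distribution with an injected element of randomness, can be sketched in a few lines. This is an illustrative toy, not the actual GPT-4 or Bard implementation: the word scores below are invented stand-ins for what a trained transformer would output, and the `temperature` parameter is one common way such randomness is controlled.

```python
import math
import random

def sample_next_word(scores, temperature=1.0, rng=None):
    """Pick the next word from a toy vocabulary.

    `scores` maps candidate words to unnormalised scores (hypothetical
    stand-ins for a trained model's outputs). A softmax turns them into
    probabilities; `temperature` > 1 flattens the distribution (more
    randomness), values < 1 sharpen it (more deterministic).
    """
    rng = rng or random.Random()
    words = list(scores)
    scaled = [scores[w] / temperature for w in words]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]

# Hypothetical scores for continuing "The patient was ..."
scores = {"stable": 2.1, "discharged": 1.4, "anxious": 0.3}
word = sample_next_word(scores, temperature=0.8, rng=random.Random(0))
```

Because the draw is probabilistic, repeated calls with different random seeds can return different continuations, which is the inconsistency the abstract attributes to the algorithm's element of randomness.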

Original language: English (US)
Article number: bmjebm-2023-112429
Journal: BMJ Evidence-Based Medicine
Early online date: Aug 23 2023
DOIs
State: E-pub ahead of print - Aug 23 2023

Keywords

  • Ethics
  • Health
  • Information Science
  • Policy
  • Publishing

ASJC Scopus subject areas

  • Medicine (all)
