Abstract
LLMs, such as GPT-4 and Bard, are transformer architectures based on deep learning techniques. Briefly, they generate text by predicting the next word in a sentence given all the previous words, so the choice of which word appears in which place depends on probabilities derived from the massive amount of text data used to train them. Additionally, an element of randomness is built into the LLM's sampling step to introduce a measure of inconsistency and simulate a perception of creativity. LLMs can capture long-range dependencies between words, meaning that they can 'understand' the context around a word within a sentence, leading to the perception of coherence and contextual relevance in the generated output. Without keeping how LLMs work in mind, it is easy to anthropomorphise them when interacting with them; this is one of several factors behind the sensational uptake and hype in recent months.
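To make the next-word prediction and randomness described above concrete, here is a minimal sketch (not from the article) of temperature-controlled sampling over a toy vocabulary. The function name, vocabulary, and logit values are all hypothetical illustrations; real LLMs apply the same idea over vocabularies of tens of thousands of tokens.

```python
# Illustrative sketch: temperature-controlled next-word sampling.
# The vocabulary and logits below are hypothetical toy values.
import math
import random

def sample_next_word(logits, temperature=0.8):
    """Sample one word from a {word: score} dict via softmax with temperature.

    Higher temperature flattens the distribution (more varied, 'creative' output);
    temperature near 0 approaches greedy selection of the most probable word.
    """
    words = list(logits)
    scaled = [logits[w] / temperature for w in words]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

# Toy example: model scores for the word following "The patient was ..."
toy_logits = {"admitted": 2.1, "discharged": 1.7, "stable": 0.9, "purple": -2.0}
print(sample_next_word(toy_logits, temperature=0.8))
```

Because sampling is probabilistic, repeated calls with the same input can return different words, which is the inconsistency the abstract refers to.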
Original language | English (US)
--- | ---
Article number | bmjebm-2023-112429
Pages (from-to) | 201-202
Number of pages | 2
Journal | BMJ Evidence-Based Medicine
Volume | 29
Issue number | 3
Early online date | Aug 23 2023
DOIs |
State | E-pub ahead of print - Aug 23 2023
Keywords
- Ethics
- Health
- Information Science
- Policy
- Publishing
ASJC Scopus subject areas
- General Medicine