Ten simple rules for using large language models in science, version 1.0

Smith, Gabriel Reuben; Bello, Carolina; Bialic-Murphy, Lalasia; Clark, Emily; Delavaux, Camille S.; Fournier de Lauriere, Camille; van den Hoogen, Johan; Lauber, Thomas; Ma, Haozhi; Maynard, Daniel S.; Mirman, Matthew; Mo, Lidong; Rebindaine, Dominic; Reek, Josephine Elena; Werden, Leland K.; Wu, Zhaofei; Yang, Gayoung; Zhao, Qingzhou; Zohner, Constantin M.; Crowther, Thomas W.; Schwartz, Russell (2024) Ten simple rules for using large language models in science, version 1.0. PLOS Computational Biology, 20(1): e1011767. ISSN 1553-7358

journal.pcbi.1011767.pdf - Published Version
Abstract

Generative artificial intelligence (AI) tools, including large language models (LLMs), are expected to radically alter the way we live and work, with as many as 300 million jobs at risk [1]. Arguably the best-known LLM currently is GPT (generative pre-trained transformer), developed by the American company OpenAI [2]. Since its release in late 2022, GPT's chatbot interface, ChatGPT, has exploded in popularity, setting a new record for the fastest-growing user base in history [3]. The appeal of GPT and other LLMs stems from their ability to effectively carry out multistep tasks and provide clear, human-like responses to complicated queries and prompts (Box 1). Unsurprisingly, this capacity is catching the eye of scientists [4].

Item Type: Article
Subjects: Euro Archives > Biological Science
Depositing User: Managing Editor
Date Deposited: 23 Mar 2024 08:15
Last Modified: 23 Mar 2024 08:15
URI: http://publish7promo.com/id/eprint/4595
