A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery

Computer Science
Y. Zhang, X. Chen, et al.

This paper provides a comprehensive survey of over 260 scientific LLMs, unveiling cross-field and cross-modal connections in architectures and pre-training techniques, summarizing pre-training datasets and evaluation tasks for each field and modality, and examining deployments that accelerate scientific discovery. Resources are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models. This research was conducted by Yu Zhang, Xiusi Chen, Bowen Jin, Sheng Wang, Shuiwang Ji, Wei Wang, and Jiawei Han.

Abstract
In many scientific fields, large language models (LLMs) have revolutionized the way text and other modalities of data (e.g., molecules and proteins) are handled, achieving superior performance in various applications and augmenting the scientific discovery process. Nevertheless, previous surveys on scientific LLMs often concentrate on one or two fields or a single modality. In this paper, we aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs regarding their architectures and pre-training techniques. To this end, we comprehensively survey over 260 scientific LLMs, discuss their commonalities and differences, and summarize pre-training datasets and evaluation tasks for each field and modality. Moreover, we investigate how LLMs have been deployed to benefit scientific discovery. Resources related to this survey are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models.
Publisher
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Published On
Nov 12, 2024
Authors
Yu Zhang, Xiusi Chen, Bowen Jin, Sheng Wang, Shuiwang Ji, Wei Wang, Jiawei Han
Tags
scientific LLMs
cross-field connections
cross-modal modeling
pre-training techniques
architectures
datasets and evaluation
scientific discovery deployment