Computer Science
A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery
Yu Zhang, Xiusi Chen, Bowen Jin, Sheng Wang, Shuiwang Ji, Wei Wang, Jiawei Han
This paper provides a comprehensive survey of over 260 scientific LLMs, unveiling cross-field and cross-modal connections in architectures and pre-training techniques, summarizing pre-training datasets and evaluation tasks for each field and modality, and examining deployments that accelerate scientific discovery. Resources are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models.