Abstract
This paper introduces the use of large language models (LLMs), specifically generative pre-trained transformers (GPT), to enhance materials language processing (MLP). The authors address challenges associated with complex model architectures and extensive fine-tuning in traditional MLP approaches by employing strategic prompt engineering with GPT models. Their findings demonstrate high performance in text classification, named entity recognition (NER), and extractive question answering (QA) using limited datasets, even with zero-shot or few-shot learning. The GPT-based approach is shown to be effective across various materials classes and can assist materials scientists in knowledge-intensive MLP tasks, even without specialized expertise. Furthermore, the authors highlight the potential of GPT models to reduce researcher workload by providing initial labeling sets and validating human annotations.
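The abstract describes prompt-engineered zero-shot and few-shot learning rather than heavily fine-tuned task models. As a rough illustration of what a few-shot NER prompt of this kind might look like, the sketch below builds a prompt from a couple of labeled example sentences and sends it to a GPT model. This is a minimal sketch, not the authors' prompt design: the entity labels (MAT, PROP, PROC, VALUE), the example sentences, the model name, and the use of the OpenAI Python client are all assumptions made for demonstration.

```python
# Illustrative few-shot prompt for materials NER with a GPT model.
# Assumes the OpenAI Python client (openai>=1.0); the entity labels,
# example sentences, and model name are hypothetical stand-ins, not
# the authors' actual prompts or data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_EXAMPLES = """\
Sentence: LiFePO4 cathodes show a specific capacity of 160 mAh/g.
Entities: MAT: LiFePO4 | PROP: specific capacity | VALUE: 160 mAh/g

Sentence: The perovskite film was annealed at 150 C for 30 minutes.
Entities: MAT: perovskite film | PROC: annealed | VALUE: 150 C, 30 minutes
"""

def extract_entities(sentence: str, model: str = "gpt-4") -> str:
    """Ask the model to tag materials-related entities in one sentence."""
    prompt = (
        "Label the materials science entities (MAT, PROP, PROC, VALUE) "
        "in the sentence, following the format of the examples.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Sentence: {sentence}\nEntities:"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output simplifies downstream parsing
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(extract_entities(
        "NMC811 electrodes retained 85% capacity after 500 cycles."
    ))
```

The same pattern extends to the other tasks the abstract mentions: for text classification the examples would pair sentences with class labels, and for extractive QA they would pair a passage and question with the answer span.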
Publisher
Communications Materials
Published On
Feb 15, 2024
Authors
Jaewoong Choi, Byungju Lee
Tags
large language models
generative pre-trained transformers
materials language processing
text classification
named entity recognition
zero-shot learning
research workload reduction