Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs



The chain-of-thought (CoT) method generates explicit reasoning paths but can be suboptimal, while tree-of-thought (ToT) finds better paths at a high inference cost. This work shows that fine-tuning LLMs on ToT search trees via Chain of Preference Optimization (CPO) lets CoT match or surpass ToT performance without the heavy inference burden.
Abstract
The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through Chain of Preference Optimization (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at https://github.com/sail-sg/CPO.
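The core idea — turning the ToT search tree into step-level preference data for fine-tuning — can be illustrated with a small sketch. This is not the authors' implementation: the toy tree, node scores, and helper names (`collect_preference_pairs`, `dpo_step_loss`) are illustrative assumptions, and the loss is a simplified per-step DPO-style objective standing in for the full training pipeline.

```python
import math

# Hypothetical ToT search tree: each parent maps to candidate next steps,
# each with a heuristic value from the LLM evaluator during search.
# Steps kept on the search path are "preferred"; pruned siblings are
# "dispreferred" — the inherent preference signal that CPO exploits.
tree = {
    "root": [("step_a", 0.9), ("step_b", 0.3)],    # step_a selected
    "step_a": [("step_c", 0.8), ("step_d", 0.2)],  # step_c selected
}

def collect_preference_pairs(tree):
    """At every expansion, pair the selected step with each pruned sibling."""
    pairs = []
    for parent, children in tree.items():
        chosen = max(children, key=lambda c: c[1])[0]
        for name, _ in children:
            if name != chosen:
                pairs.append((parent, chosen, name))
    return pairs

def dpo_step_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Simplified per-step DPO objective: -log sigmoid(beta * margin)."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

pairs = collect_preference_pairs(tree)
# Toy log-probabilities standing in for policy/reference model scores.
loss = sum(dpo_step_loss(-1.0, -2.0, -1.5, -1.5) for _ in pairs) / len(pairs)
```

In a real setting, the log-probabilities would come from the policy and frozen reference LLM scoring each reasoning step, and the averaged loss would be minimized over all pairs harvested from many search trees.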
Publisher
NeurIPS 2024
Authors
Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin
Tags
Chain-of-Thought (CoT)
Tree-of-Thought (ToT)
Chain of Preference Optimization (CPO)
LLM fine-tuning
reasoning alignment
inference efficiency
tree-search supervision