Active Prompting with Chain-of-Thought for Large Language Models

Computer Science


S. Diao, P. Wang, et al.

Large language models improve at complex reasoning when guided by example-based chain-of-thought (CoT) prompts. This paper introduces Active-Prompt, an uncertainty-driven method for selecting the most informative questions for human CoT annotation, yielding superior performance on eight complex reasoning tasks. The research was conducted by Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, and Tong Zhang.
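The selection step can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a hypothetical query_llm helper standing in for whatever model API is used, and scores uncertainty with a simple disagreement measure (the fraction of distinct sampled answers), one of the uncertainty metrics the paper discusses.

    # Minimal sketch of uncertainty-driven question selection in the spirit of
    # Active-Prompt. `query_llm` is a hypothetical placeholder, not part of the
    # paper's released code.

    def query_llm(question: str, temperature: float = 0.7) -> str:
        """Hypothetical LLM call: return one sampled answer for `question`."""
        raise NotImplementedError("plug in your model provider here")

    def disagreement(question: str, k: int = 5) -> float:
        """Sample k answers and score uncertainty as the fraction of distinct answers."""
        answers = [query_llm(question) for _ in range(k)]
        return len(set(answers)) / k

    def select_for_annotation(questions: list[str], n: int = 8) -> list[str]:
        """Return the n most uncertain questions as candidates for human CoT annotation."""
        return sorted(questions, key=disagreement, reverse=True)[:n]

The selected questions are then annotated by humans with chain-of-thought rationales and used as exemplars in the prompt for the remaining questions.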

~3 min • Beginner • English
Citation Metrics
Citations: 31
Influential Citations: 17
Reference Count: 91
Citation by Year (chart)

Note: The citation metrics presented here have been sourced from Semantic Scholar and OpenAlex.
