Abulhair Saparov - Purdue University
Fall 2025: Unless noted otherwise, sessions will be virtual on Zoom.
Can/Will LLMs Learn to Reason?
Nov 12, 2025
Watch on YouTube
Abstract
Reasoning—the process of drawing conclusions from prior knowledge—is a hallmark of intelligence. Large language models and, more recently, large reasoning models have demonstrated impressive results on many reasoning-intensive benchmarks. Careful studies over the past few years have revealed that LLMs may exhibit some reasoning behavior, and that larger models tend to do better on reasoning tasks. However, even the largest current models still struggle with various kinds of reasoning problems. In this talk, we will try to address the question: Are the observed reasoning limitations of LLMs fundamental in nature? Or will they be resolved by further increasing the size and training data of these models, or by better techniques for training them? I will describe recent work that tackles this question from several different angles. The answer will help us better understand the risks posed by future LLMs as vast resources continue to be invested in their development.

About the Speaker
