Navigating solution spaces in large language models through controlled embedding exploration

Abstract

Large Language Models (LLMs) struggle with complex reasoning due to limited output diversity and inefficient search. We propose an embedding-based search framework that optimises the embedding of the first generated token to guide the rest of the generation. It combines (1) embedding perturbation for controlled exploration and (2) Bayesian optimisation, which refines embeddings via a verifier-guided objective while balancing exploration and exploitation. This approach improves reasoning accuracy and coherence without relying on heuristic search. Experiments demonstrate superior correctness with minimal computational overhead, making the method a scalable, model-agnostic solution.
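The core loop can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the verifier here is a stand-in proxy score, and random Gaussian perturbation of the incumbent embedding stands in for the Bayesian-optimisation acquisition step; all names, dimensions, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 16  # toy embedding dimension (assumption; real models use hundreds+)

# Hypothetical verifier: in the paper, a verifier scores full generations.
# As a toy proxy, score an embedding by its (negative) distance to a
# hidden "good" first-token embedding.
target = rng.normal(size=DIM)

def verifier_score(embedding: np.ndarray) -> float:
    return -float(np.linalg.norm(embedding - target))

def search_first_token_embedding(init: np.ndarray,
                                 n_rounds: int = 30,
                                 n_candidates: int = 8,
                                 sigma: float = 0.5):
    """Perturb the first-token embedding; keep the verifier's best pick.

    Gaussian perturbation stands in for the Bayesian-optimisation
    acquisition step; here the exploration/exploitation balance is
    controlled only by the perturbation scale `sigma`.
    """
    best, best_score = init, verifier_score(init)
    for _ in range(n_rounds):
        # (1) controlled exploration: Gaussian perturbations of the incumbent
        candidates = best + sigma * rng.normal(size=(n_candidates, DIM))
        scores = [verifier_score(c) for c in candidates]
        i = int(np.argmax(scores))
        # (2) exploitation: move only if the verifier prefers a candidate
        if scores[i] > best_score:
            best, best_score = candidates[i], scores[i]
    return best, best_score

init = rng.normal(size=DIM)
final, final_score = search_first_token_embedding(init)
```

Since a candidate is accepted only when the verifier prefers it, the score is monotonically non-decreasing over rounds; a full Bayesian-optimisation variant would replace the Gaussian proposals with an acquisition function fit on the scored candidates.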

Publication
The 42nd International Conference on Machine Learning (ICML)
Yudong Chen
Assistant Professor in Statistics

My research interests include changepoint detection, high-dimensional statistics, robust statistics, online algorithms and machine learning.