Getting Large Language Models (LLMs) to do what we want can be tough, especially for mathy, symbolic, or commonsense reasoning tasks. Chain-of-Thought (CoT) prompting—a prompt engineering technique that encourages LLMs to decompose large problems into smaller chunks—helped LLMs improve at these types of complex tasks so much that it spawned a slew of spinoffs seeking to improve on the original. But CoT and its siblings suffer from a glaring flaw—a lot hinges on that first thought. If it’s off-kilter, so is the rest of the chain.
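To make the idea concrete, here is a minimal sketch of zero-shot CoT prompting. The helper name and prompt wording are illustrative, not from any particular library; the core trick is simply appending an instruction that nudges the model to lay out intermediate reasoning steps.

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a question in a zero-shot Chain-of-Thought prompt.

    The trailing cue invites the model to decompose the problem into
    smaller steps before answering, rather than jumping to a final answer.
    """
    return f"Q: {question}\nA: Let's think step by step."


print(make_cot_prompt("If a train travels 60 miles in 1.5 hours, what is its average speed?"))
```

The resulting string would then be sent to the model as-is; few-shot variants instead prepend worked examples that each show step-by-step reasoning.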

Sometimes we (humans) need to explore several divergent threads from the start, hit a few dead-end thoughts, rule those out, retrace our steps to some thought-fork we passed earlier, and forge ahead down untrodden paths until, eventually, we find a solution. This type of thinking resembles a tree more than a chain. Tree-of-Thoughts (ToT)—a riff on CoT—is a prompt engineering approach that elicits this kind of branching-out thinking in LLMs so that they can transcend left-to-right generation and tackle problems with a more human-like, trial-and-error approach.
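The branch-evaluate-backtrack loop can be sketched as a toy breadth-first search over partial reasoning paths. Everything here is a simplified assumption: `generate` and `evaluate` stand in for LLM calls (propose next thoughts, score a partial path), and the beam-style pruning is one of several search strategies a ToT system might use.

```python
from typing import Callable


def tree_of_thoughts(
    root: str,
    generate: Callable[[str], list[str]],  # stand-in LLM call: propose next thoughts
    evaluate: Callable[[str], float],      # stand-in LLM call: score a partial path
    depth: int = 3,
    beam_width: int = 2,
) -> str:
    """Toy ToT sketch: branch into candidate thoughts, score them,
    keep the most promising paths, and implicitly backtrack by
    discarding dead ends at each level of the tree."""
    frontier = [root]
    for _ in range(depth):
        # Branch: extend every surviving path with each candidate thought.
        candidates = [path + "\n" + t for path in frontier for t in generate(path)]
        if not candidates:
            break
        # Prune: keep only the top-scoring paths; dead ends fall away here.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
    return max(frontier, key=evaluate)


# Stub "model" for demonstration: paths with more "good" steps score higher.
demo_generate = lambda path: ["good step", "bad step"]
demo_evaluate = lambda path: float(path.count("good"))

best = tree_of_thoughts("Problem: ...", demo_generate, demo_evaluate)
print(best.count("good"))  # with depth=3, the best path keeps 3 good steps
```

A real implementation would replace the stubs with prompted LLM calls and might use depth-first search with explicit backtracking instead of a beam, but the shape of the search is the same.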