Codified Foreshadowing-Payoff Text Generation
Longfei Yun, Kun Zhou, Yupeng Hou, Letian Peng, Jingbo Shang
2026-01-13
Summary
This paper shows how to make computer-generated stories actually make sense, specifically by ensuring that things hinted at earlier in the story genuinely matter later on.
What's the problem?
Large language models, which are used to write stories, are often good at making sentences that flow together, but they struggle to remember details introduced early in the story and connect them to events later on. Think of it like setting up a 'Chekhov's gun' – introducing something important – and then completely forgetting about it. Current methods for evaluating these stories focus on whether they *sound* good, not whether they are logically consistent and follow through on earlier hints.
What's the solution?
The researchers created a new system called Codified Foreshadowing-Payoff Generation, or CFPG. This system doesn't just focus on making the story sound good; it breaks down the story into cause-and-effect relationships. It identifies foreshadowing (hints), the triggers that cause those hints to become important, and the payoff (what happens as a result). They then 'teach' the computer to recognize these patterns using examples from existing stories, making sure it doesn't just mention a hint but actually follows through with it at the right time.
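The core idea can be illustrated with a small sketch. The paper does not publish its implementation here, so the names and data shapes below are hypothetical; the sketch only shows what an "executable causal predicate" over a Foreshadow-Trigger-Payoff triple might look like: the foreshadow must appear in the story, the trigger must follow it, and the payoff must follow the trigger.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FTPTriple:
    """Hypothetical Foreshadow-Trigger-Payoff record (names are illustrative)."""
    foreshadow: str   # the early hint, e.g. "a revolver hangs on the wall"
    trigger: str      # the condition that activates it, e.g. "the duel begins"
    payoff: str       # the observable outcome, e.g. "the revolver is fired"

def find_index(events: List[str], clue: str) -> Optional[int]:
    """Return the index of the first story event mentioning the clue."""
    for i, event in enumerate(events):
        if clue.lower() in event.lower():
            return i
    return None

def payoff_fulfilled(events: List[str], triple: FTPTriple) -> bool:
    """Causal predicate: foreshadow, trigger, and payoff must all occur,
    in that temporal order."""
    f = find_index(events, triple.foreshadow)
    t = find_index(events, triple.trigger)
    p = find_index(events, triple.payoff)
    return f is not None and t is not None and p is not None and f < t < p

events = [
    "A revolver hangs on the wall of the cabin.",
    "At dawn, the duel begins.",
    "The revolver is fired.",
]
triple = FTPTriple("revolver hangs", "duel begins", "revolver is fired")
print(payoff_fulfilled(events, triple))  # True: the gun set up early is fired later
```

A story that mentions the revolver but never fires it, or fires it before the duel, would fail this check, which is exactly the kind of structural supervision the paper argues LLMs need.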
Why does it matter?
This work is important because it shows that simply making a computer write fluent sentences isn't enough to create a good story. To truly tell a compelling narrative, computers need to understand and implement fundamental storytelling techniques like foreshadowing and payoff. This research moves us closer to computers that can generate stories that are not only readable but also logically sound and satisfying.
Abstract
Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. However, despite advances in story generation, large language models (LLMs) frequently fail to bridge these long-range narrative dependencies, often leaving "Chekhov's guns" unfired even when the necessary context is present. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. In this paper, we introduce Codified Foreshadowing-Payoff Generation (CFPG), a novel framework that reframes narrative quality through the lens of payoff realization. Recognizing that LLMs struggle to intuitively grasp the "triggering mechanism" of a foreshadowed event, CFPG transforms narrative continuity into a set of executable causal predicates. By mining and encoding Foreshadow-Trigger-Payoff triples from the BookSum corpus, we provide structured supervision that ensures foreshadowed commitments are not only mentioned but also temporally and logically fulfilled. Experiments demonstrate that CFPG significantly outperforms standard prompting baselines in payoff accuracy and narrative alignment. Our findings suggest that explicitly codifying narrative mechanics is essential for moving LLMs from surface-level fluency to genuine narrative competence.