G-Verified: Levent Bulut

Why AI Cannot Write Emotional Scenes

May 10, 2026

Ask any experienced reader whether AI-generated prose "feels" like something. The answer is almost always the same: technically fluent, structurally coherent, vocabulary appropriate — and yet nothing happens in the body of the reader.

This post argues that this failure is structural, not a matter of scale or training data. It introduces Objective Projection (OP) — a narrative methodology that explains why the failure occurs and what it would take to fix it.


Two Scenes

Here is a standard AI-generated grief scene:

She missed him terribly. The house felt empty and cold without him. Every room held a memory. She sat alone, overwhelmed by sadness, wishing she could turn back time.

Here is the same scene written using Objective Projection:

Arthur filled the kettle. He had taken two cups from the shelf. He set one back. He set the single cup on the counter and waited for the kettle. He stood in the kitchen doorway with the cup in both hands.

No emotion is named in the second scene. No adjective describes a feeling. And yet most readers report a stronger physical response to it. The question is: why?


The Mechanism: Two Pathways to the Amygdala

The answer is neurobiological. Joseph LeDoux's dual-pathway model identifies two routes by which experience reaches emotional response:

The High Road (cortex → amygdala): slow, interpretive, conscious. Processes meaning, labels, categories. Receives: "she was sad."

The Low Road (thalamus → amygdala): fast, pre-conscious, pre-cortical. Responds to raw sensory parameters — spatial geometry, temperature, sound, light. Receives: a cold surface at the fingertip, a door ajar at 14 cm, silence at 11 metres.

Most literary writing — and essentially all AI-generated writing — addresses only the High Road. It delivers emotional descriptions that readers process and interpret. Objective Projection targets the Low Road. It constructs physical configurations that trigger biological response before interpretation occurs.

The first scene tells the cortex that a character is sad. The second delivers the physical parameters — two cups, one returned — that activate the same neural systems as witnessing a real act of loss.


The Six Parameters

Objective Projection formalizes scene construction through six physical variables:

Parameter               Symbol   What It Encodes
Spatial Matrix          M        Room dimensions, ceiling height, distances, exits
Temporal Flow           T        Time of day, duration, pace of events
Environmental Vectors   V        Temperature, light intensity/direction, humidity
Delta                   Δ        Rate of change: how fast conditions shift
Vacuum Variable         Ω        Structural absence: what is not there
Narrative Gravity       Ng       The dominant event that organizes all other parameters

The most powerful is the Vacuum Variable — the object that should be present but isn't. Arthur's second cup. The lamp that has been on for eleven days in a dead person's room. The phone that isn't being answered. Absence, precisely constructed, is the most direct route to the Low Road.
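The six parameters can be sketched as a simple data structure. Everything below — the class name, field names, and example values — is a hypothetical illustration of the formalization, not a definition from the published methodology:

```python
from dataclasses import dataclass

@dataclass
class PhysicalMatrix:
    """Hypothetical sketch of an Objective Projection scene specification.

    Fields mirror the six parameters; all values here are illustrative.
    """
    spatial: dict           # M: room dimensions, distances, exits
    temporal: dict          # T: time of day, duration, pace
    environmental: dict     # V: temperature, light, humidity
    delta: dict             # Δ: rate of change of conditions
    vacuum: list            # Ω: what should be present but is not
    narrative_gravity: str  # Ng: the organizing event

# Arthur's kitchen, from the second scene above (values invented)
scene = PhysicalMatrix(
    spatial={"room": "kitchen", "doorway_distance_m": 2.1},
    temporal={"time": "morning", "duration_s": 180},
    environmental={"light": "overcast, single window"},
    delta={"kettle_temperature": "rising"},
    vacuum=["second cup, returned to the shelf"],
    narrative_gravity="a death in the household",
)

assert scene.vacuum  # the Vacuum Variable must be operative
```

The point of the sketch is the separation it enforces: the matrix is specified before a single sentence of prose is written, and the prose is then generated against it rather than against an emotion label.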


Why LLMs Cannot Do This

Large language models are trained to produce tokens whose probabilities match co-occurrence patterns in training corpora. When training data associates grief with "empty," "cold," and "sadness," the model learns to generate these associations fluently.

But association is not causation. Describing conditions associated with grief is not the same as constructing conditions that biologically produce grief. The former addresses the cortex. The latter requires understanding the causal relationship between physical parameters and neurobiological response.

LLMs model distribution, not causation. They produce text that resembles emotionally effective writing. But resemblance is not function.

This is not solvable by scaling. A larger model trained on the same emotionally labeled prose will produce more fluent emotionally labeled prose. The objective function must change.


The OPCT: Making the Claim Testable

Objective Projection is not a stylistic preference — it is a testable claim about mechanism. The Objective Projection Calibration Test (OPCT v2.0) provides a pre-registered empirical protocol:

Dimension               What It Measures                             Max Score
Parameter Specificity   Physical parameters numerical and precise    20
Low Road Targeting      Pre-cortical pathway engaged                 20
Vacuum Variable         Structural absence operative                 20
Adjective Embargo       Zero emotional adjectives                    20
Simile Prohibition      Zero similes or metaphor-vehicles            20

Pre-registration: osf.io/us8bw — falsification criteria are public.
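To make the protocol concrete, here is a minimal sketch of how one OPCT dimension, the Adjective Embargo, might be checked automatically. The wordlist and the all-or-nothing scoring rule are my own placeholders, not the pre-registered scoring procedure:

```python
# Hypothetical checker for one OPCT dimension: the Adjective Embargo.
# The wordlist and scoring rule are illustrative placeholders only.
EMOTION_ADJECTIVES = {
    "sad", "lonely", "empty", "cold", "overwhelmed",
    "terrible", "heartbroken", "devastated",
}

def adjective_embargo_score(text: str, max_score: int = 20) -> int:
    """Return max_score if no listed emotional adjective appears, else 0."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return max_score if words.isdisjoint(EMOTION_ADJECTIVES) else 0

ai_scene = "She sat alone, overwhelmed by sadness, in the cold empty house."
op_scene = "He set the single cup on the counter and waited for the kettle."

print(adjective_embargo_score(ai_scene))  # 0
print(adjective_embargo_score(op_scene))  # 20
```

A real scorer would need a part-of-speech tagger rather than a fixed wordlist ("cold" is legitimate when it describes a surface temperature, not a mood), which is exactly why the dimension is scored rather than pattern-matched.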


The Dataset

The Objective Projection Dataset on Hugging Face contains:

200 annotated scene pairs across 30 emotional categories — primary SFT training corpus, each with physical_matrix, bad_output, target_output, and engineering_note.

30-scene OPCT benchmark — scored across three compliance groups: high compliance (mean 93.6), partial compliance (mean 71.0), non-compliance (mean 7.1). Designed for model evaluation and compliance classifier training.

TR+EN multilingual parallel scenes — 10 scenes, same physical matrix, full output in both languages.

6 genre-specific prompt templates — horror, romance, sci-fi, thriller, literary fiction, mystery.
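The record layout below follows the four field names the dataset description lists (physical_matrix, bad_output, target_output, engineering_note); the example content and the validator are my own sketch, not the actual dataset:

```python
# Hypothetical sketch of one annotated scene pair, using the field names
# from the dataset description; the values are invented for illustration.
REQUIRED_FIELDS = {"physical_matrix", "bad_output", "target_output", "engineering_note"}

def validate_record(record: dict) -> bool:
    """Check that a scene pair carries all four annotated fields, non-empty."""
    return REQUIRED_FIELDS <= record.keys() and all(record[f] for f in REQUIRED_FIELDS)

record = {
    "physical_matrix": {"vacuum": ["second cup"], "spatial": {"room": "kitchen"}},
    "bad_output": "She missed him terribly. The house felt empty and cold.",
    "target_output": "He set one back. He set the single cup on the counter.",
    "engineering_note": "The Vacuum Variable carries the scene; no emotion is named.",
}

print(validate_record(record))  # True
```

The bad_output/target_output pairing is what makes the corpus usable for SFT: the model is shown both the associative (High Road) rendering and the parametric (Low Road) rendering of the same matrix.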


The Most Important Rule

Parameters govern the writing. They do not appear in it.

❌ Wrong:

The figure's centre of mass transferred at 0.2 Hz oscillation frequency.

✓ Correct:

He shifted from his right foot to his left. Then back.

The physical parameters are the engineering layer. The prose is the output layer. One is invisible to the reader. Both are necessary.


Academic Foundation

This methodology is formally documented across 26 peer-deposited publications.



© Levent Bulut, 2026. Dataset licensed CC BY-NC-ND 4.0.


Levent Bulut

Founder of the Objective Projection (Nesnel İzdüşüm) and Narrative Engineering methodologies within the framework of the Bulut Doctrine (Bulut Doktrini); systems theorist and author. He conducts research on the physics of literature and parametric narrative construction.