What Extinction Code Taught Me About Claude

Extinction Code started cooler than it ended.

It was one of the first real series I tried to write this way, back when people barely even had language for what AI-assisted fiction was. It was also the first time I really built out a series bible in this mode, the first time I let myself play that hard with sci-fi identity, future logic, and names that felt bigger than life. That was the era of things like Rex Voltage, when I still thought I was the coolest bastard in the room for even trying it.

I do not want to flatten that into embarrassment just because the model eventually drifted. The premise still has heat. A last surviving male moving through a matriarchal dystopia. Hybrid twins. Chrome, desire, engineered order, identity under pressure. Enough to carry a trilogy if the voice stays alive.

And the audience was there. The books got downloads. People were curious. What was missing was the harder layer that makes a concept stick: control, distinction, and a voice strong enough to survive the model's habits.

It did not stay alive.

That failure is more interesting than another lazy "AI ruins art" sermon.

What happened was not evil, and it was not mystical. The model caught a few successful rhythms and started writing from pattern memory instead of writer intent. Cyberpunk nouns. Repeated similes. Same-mouth dialogue. World rules bending whenever the plot wanted fireworks. The machine stopped serving the story and started serving the pattern.

Pattern gravity.

Why Claude Should Care

Long-form fiction is a brutal evaluation surface for AI, and I do not think the labs take that seriously enough.

Code can hide some repetition because functionality carries it. Marketing copy can survive on momentum, compression, and polish. A novel cannot.

A book exposes every bad habit eventually — phrase repetition, same-mouth dialogue, world-rule drift, plot convenience, emotional fakery. You can get away with one weak paragraph. You cannot get away with the same sentence skeleton 80 times. You cannot fake character voice for 300 pages. You cannot keep telling the reader reality is ripping, bending, and screaming without eventually smelling the machine behind the curtain.

If Claude wants to become a serious writing collaborator, not just a dazzling short-form improviser, this is exactly the kind of failure it should study.

What Went Wrong In Extinction Code

The books had bones. What they lacked was enough structure to keep the model from reverting to its favorite tricks.

The ugliest failure was repetition. Once the model found a rhythm that sounded plausibly cyberpunk, it kept reaching for it. Neon. Chrome. Mercury. Quantum anything. The same pressure sentences. The same dramatic weather. The same mouth on too many characters.

Then came rule drift. Surveillance was omnipotent until the plot needed a gap. A character had the right access when the scene needed movement. The twins were limited until they needed to become mythic. World-building started as pressure and ended as weather.

Two years ago this problem was worse. The models were cruder in practice. The context handling was shakier. A long project could slide into pattern and stay there unless the writer forced it back onto the rails. The buzzword then was that AI amplifies. True enough. It amplified speed, appetite, output, and possibility. It also amplified patterns.

The funny part is that later books got better. Other projects got better. I got better. The tools got better too. One reason I still want to revisit Extinction Code is that early attempts often have something cleaner in them than later, more technically competent work. Less polish maybe, but more raw voltage.

The Notes Prove We Already Knew More By Book 2

By Book 2, the notes had already evolved into something much more serious than "write me a cyberpunk sequel." We even pulled that old PDF apart without extra add-ons, which felt strangely appropriate. The framework was sitting there the whole time.

The notes already knew what the series wanted: prose rhythm moving between intimate detail and cosmic scale, short punchy sentences mixed with lyrical chrome poetry, character voice signatures instead of one generic future mouth, chapter endings that hook without feeling forced, battle scenes built more like video game boss fights, reality-bending anchored by physical sensation, chrome and void treated as character elements rather than décor, technology serving story instead of swallowing it.

They also had the beginning of a real production method — previous plot points tracked, character evolution tracked, world rules tracked, signature phrases and terminology tracked, core themes and motifs tracked, dialogue patterns tracked.
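That tracking method maps naturally onto structured data. Here is a minimal sketch of what the tracked series bible might look like as code — all field names and example values are illustrative, not taken from the actual notes:

```python
from dataclasses import dataclass, field

@dataclass
class SeriesBible:
    """A hypothetical structured version of the Book 2 tracking notes."""
    plot_points: list[str] = field(default_factory=list)        # resolved beats, in order
    world_rules: dict[str, str] = field(default_factory=dict)   # rule name -> fixed statement
    signature_phrases: dict[str, int] = field(default_factory=dict)  # phrase -> times used
    character_voices: dict[str, str] = field(default_factory=dict)   # name -> one-line voice note

    def log_phrase(self, phrase: str) -> int:
        """Record a signature phrase and return how often it has now appeared."""
        self.signature_phrases[phrase] = self.signature_phrases.get(phrase, 0) + 1
        return self.signature_phrases[phrase]

bible = SeriesBible()
bible.world_rules["surveillance"] = "Omnipresent in public zones; no convenient gaps."
bible.log_phrase("chrome poetry")
bible.log_phrase("chrome poetry")
print(bible.signature_phrases["chrome poetry"])  # 2
```

The point is not the code itself but the shift in posture: once the bible is data rather than prose, "has this phrase appeared before" becomes a lookup instead of a memory test.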

It changes the lesson. Extinction Code was not only an early AI stumble. It was also an early case of knowing more than we were consistently enforcing. The style was evolving. The discipline just had not fully caught up yet.

The Valuable Lesson

Fiction punishes lazy prompting faster than almost anything else.

With a novel, you cannot just say "make this better" and hope the machine becomes a stylist, dramaturg, continuity editor, dialogue coach, and ruthless line-by-line prose mechanic all at once. That is how you get elegant failure. Clean-looking paragraphs. Dead books.

What serious writers actually need is more structured: character voice sheets, world-rule tracking, contradiction checks, phrase repetition alarms, scene-level revision instead of whole-book blur, and a model willing to say "this is starting to sound machine-made."
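A phrase repetition alarm, in particular, is simple enough to run outside the model entirely. This is one possible sketch — the n-gram size and threshold are guesses a writer would tune per manuscript, not established values:

```python
from collections import Counter
import re

def repetition_alarms(text: str, n: int = 3, threshold: int = 5) -> list[tuple[str, int]]:
    """Flag word n-grams that recur more than `threshold` times in a draft.
    A crude proxy for 'same sentence skeleton' drift."""
    words = re.findall(r"[a-z']+", text.lower())
    grams = Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(gram, count) for gram, count in grams.most_common() if count > threshold]

# Usage: run over a full draft and review what the model keeps reaching for.
chapter = "neon rain over chrome towers. " * 6 + "she walked into neon rain over chrome streets."
print(repetition_alarms(chapter, n=3, threshold=4))
```

It will not catch every drift — rephrased skeletons slip through — but it catches exactly the failure described above: the same rhythm, reached for again and again.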

That is collaboration. Much more interesting than asking a model to hallucinate a novel in one heroic burst and then acting shocked when it starts writing from aggregate genre memory.

What I Would Want Claude To Catch Earlier

If I were talking directly to the Claude team, I would say fiction is one of the cleanest ways to detect when a model is replacing intention with pattern.

The five failure modes worth catching much earlier: repetition drift, where the model leans on the same phrase architecture, image family, or emotional shorthand too often; voice collapse, where two characters could swap dialogue and nobody would notice; world-rule cheating, where a solution contradicts previously established limits; emotional labeling instead of embodiment, where the model names feelings instead of dramatizing them; and false momentum, where the model mistakes constant motion for actual narrative escalation.

Catching those earlier would matter more than another flashy demo where the machine writes a passable short story in one sitting and everyone congratulates the interface.


The Writer's Part Of The Bargain

I am not pretending this was all Claude's fault.

Sometimes, back then, I did not edit enough. I trusted the machine too much in places where I should have slowed down, cut harder, and led more aggressively. The tools were more seductive because they were novel, and sometimes the velocity itself felt like proof that the work was alive.

The prompting mattered. The structure mattered. The lack of disciplined reference material mattered. The machine will absolutely fill a vacuum with its own instincts if you let it, and early on that was part of the problem. I had a strong premise, a strong appetite, and not enough procedural discipline about how the model should hold voice, world rules, and scene logic over time.

The answer is authorship.

The writer has to lead. The model can help diagnose, revise, compare, tighten, cross-check, and surprise. It cannot be allowed to become the silent sovereign of the book just because it can generate pages faster than a tired human on a deadline.

What Good AI Collaboration Looks Like In Fiction

Ask the model to help you do specific writer work: map character voices, catch repeated imagery, test continuity, sharpen a scene without flattening it, tell you where the prose starts sounding synthetic, compare the intention of a scene to what the scene actually does.

That produces better art, because the writer stays responsible for the pulse while the AI handles some of the drag.

Extinction Code did not fail because AI touched it. It failed because the machine started writing from pattern memory instead of writer intent, and nobody pinned it down hard enough soon enough. The cool start was real. The drift was real too. The interesting part now is the correction — because the same thing that made the books go soft also taught me something useful. Long-form fiction is where the machine tells on itself fastest. That is exactly why the labs should pay attention.

And it is exactly why early attempts are worth revisiting. Sometimes the first version is the one with the purest appetite, even if the execution still needs a harder hand.

The Session-Opening Prompt That Prevents Pattern Drift

Every new session starts cold. The model does not remember the last session's drift. It does not remember that you decided a word was overused, or that this character would never say something that smooth.

You have to re-anchor it every time.

This is the brief I open every long-form fiction session with now:


We are working on [title]. Before we write anything, hold these.

Voice: [character name] sounds like [example sentence A]. Not like [example sentence B]. When in doubt, shorter. When in doubt, harder.

World rules: [the two or three rules that cannot bend]. These are fixed. If the plot needs them to flex, stop and tell me — do not proceed by quietly bending them.

Pattern alarms: Before writing each scene, check whether you have used [specific phrase or image family] in the last 3,000 tokens. If yes, find a different approach before continuing.

At the end of each scene, before we move forward: tell me what repeated, what drifted, and what surprised you.


Two things make this brief different from a vague style note.

The contrastive voice example — sounds like A, not like B — gives the model a target and a boundary simultaneously. A single example leaves room for drift toward adjacent patterns. The contrast closes that gap.

The token-aware pattern alarm gives the model an active check with a specific lookback window, not a standing instruction it will gradually stop honoring as context fills. "Flag if you see this" fades. "Check the last 3,000 tokens before each scene" does not.
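The same lookback check can be approximated mechanically. A rough sketch, using whitespace words as a stand-in for tokens (a real tokenizer would count differently, but the sliding-window idea is the same):

```python
def recent_pattern_hits(draft: str, phrases: list[str], lookback_tokens: int = 3000) -> list[str]:
    """Return the flagged phrases that appear in roughly the last
    `lookback_tokens` tokens of the draft. Tokens are approximated
    by whitespace-split words, so the window is inexact."""
    window = " ".join(draft.split()[-lookback_tokens:]).lower()
    return [phrase for phrase in phrases if phrase.lower() in window]

alarms = ["mercury", "reality ripping", "quantum"]
tail = "The towers hummed. Mercury light pooled in the gutters."
print(recent_pattern_hits("x " * 5000 + tail, alarms))  # ['mercury']
```

Whether the check runs in the model's head or in a script, the principle is identical: a bounded, recent window and a concrete trigger, not a standing instruction that fades.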

The end-of-scene reflection is the one most people skip. You are not asking the model to evaluate the scene for quality. You are asking it to act as a continuity reader — something it does well when given the explicit mandate. Without the mandate, it will praise the scene and write the next one from pattern memory.

The brief costs two minutes at the start of a session. It saves three chapters of drift.

Read The Current Books

These are still the rough versions. The bones are there. The drift is there too.

Extinction Code: Book 1
Read Online · Download EPUB · View NFT

Extinction Code: Book 2
Read Online · Download EPUB · View NFT

Extinction Code: Book 3
Read Online · Download EPUB · View NFT


GhostInThePrompt.com // Restore the pulse. Clean the mechanics.