Just audited 62 Ghost articles. Found 8 with significant repetition. Same core concept explained 2-3 times per article. Multiple conclusion sections. "Ghost Says..." restating the exact intro.
Not vague content. Sharp writing. Good examples. Technical depth. But repetitive structure. AI saying the same thing three different ways.
The audit caught it across articles: linkedin-timing-bomb.md had the premise stated twice (intro plus "Ghost Says..."). 138-books-10-months.md had three wrap-up sections restating the same thesis. five-tokens-fixed-price.md explained the economics three times. the-dilemma.md restated its solution in three separate sections.
All written by AI. All repetitive. All fixable.
This article explains why it happens, how to detect it, and how to prompt against it.
Why Transformers Repeat
Language models train on internet text, and internet text is repetitive by design. Academic papers run abstract → introduction → body → conclusion that restates the abstract. Blog posts run hook → explanation → key takeaway → summary that restates the hook. Business documents run executive summary → details → recommendations → conclusion that restates the summary.
The pattern AI learns: say it three times. Opening, body, ending. Restate for emphasis.
Attention mechanisms compound this. Transformers look at previous tokens to generate the next token. What they track is "what words fit this context based on training patterns." What they don't track is "did I already say this three paragraphs ago?" The model generates coherent text without global awareness of redundancy.
Temperature makes it worse. Low temperature (0.2-0.5) produces more repetitive output — model picks high-probability tokens, safe choices, patterns from training data. High temperature (0.8-1.0) creates more variation but more hallucination risk. You can't optimize all three simultaneously: creativity, accuracy, repetition-avoidance.
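The sharpening effect is easy to see on a toy distribution. This sketch (hypothetical logits, not from any real model) applies softmax with temperature scaling and shows how low temperature concentrates probability mass on the single "safe" token:

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits: one "safe" high-probability token, three alternatives.
logits = [4.0, 2.0, 1.0, 0.5]

low = apply_temperature(logits, 0.3)   # sharpened: safe token dominates
high = apply_temperature(logits, 1.0)  # flatter: alternatives get real mass

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.3 the first token takes nearly all the probability; at 1.0 the alternatives stay live. Repetitive phrasing is the text-level echo of that concentration.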
Context windows spring the trap. The model sees N tokens of context, and long articles exceed that window. A section written at token 15,000 doesn't "remember" a similar section at token 3,000 once it falls outside the window. Repetition emerges from architectural blindness, not carelessness.
The Five Repetition Patterns
Multiple Conclusion Sections. The structure looks like: Introduction → Body → "What This Means" → "The Bottom Line" → "Ghost Says..." → Final wrap-up. All say the same thing. Core thesis restated 3-4 times with different headers. From the 138-books audit: section at line 366 said "AI amplified my systematic approach," section at line 440 said "Systematic creativity scales," section at line 457 said "Amplifying systematic creative work." Three sections. Same message. Different words. The fix: read each section that follows the body and ask whether it adds new information. If not, delete it.
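A rough version of that check can be automated. This sketch flags section pairs whose bodies share heavy word overlap — it catches word-level repetition, not paraphrase, and the threshold is a tunable guess, not a calibrated value. The section texts are condensed stand-ins for the 138-books examples:

```python
import re

def word_set(text):
    """Lowercased words of 4+ letters, as a crude content-word filter."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def jaccard(a, b):
    """Word-overlap similarity between two texts (0.0 to 1.0)."""
    ws_a, ws_b = word_set(a), word_set(b)
    if not ws_a or not ws_b:
        return 0.0
    return len(ws_a & ws_b) / len(ws_a | ws_b)

def flag_redundant_sections(sections, threshold=0.2):
    """Return (title, title, score) for section pairs above the threshold."""
    titles = list(sections)
    flagged = []
    for i, t1 in enumerate(titles):
        for t2 in titles[i + 1:]:
            score = jaccard(sections[t1], sections[t2])
            if score >= threshold:
                flagged.append((t1, t2, round(score, 2)))
    return flagged

sections = {
    "What This Means": "AI amplified my systematic approach to writing books.",
    "The Bottom Line": "Systematic creativity scales when AI amplifies the approach.",
    "Ghost Says...": "Amplifying systematic creative work is what AI does best.",
}
print(flag_redundant_sections(sections))
```

Note the limitation: "amplified" and "amplifies" don't match as words, so heavily paraphrased repetition slips through. A human pass (or an LLM judge) is still the real check; this just surfaces the obvious cases.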
Premise Restated in Ending. Opening explains core concept. Body gives examples and evidence. "Ghost Says..." restates the core concept from the opening. The opening already explained it — the ending just says it again. From the linkedin-timing-bomb audit: line 14 had "LinkedIn runs on synchronized performance cycles..." and line 226 had "Ran this operation for three months. Built spreadsheet tracking LinkedIn performance cycles..." Same premise, twice, no new information added. Compare opening paragraph to ending section. If they explain the same core concept, rewrite the ending to add new context.
Concept Explained Twice "For Clarity." The first explanation is thorough and technical. Then later the article says, in effect: let me explain this again more clearly. From the five-tokens audit: lines 40-55 explained the VXX/VIX ratio with math, lines 168-185 explained the same ratio concept again, lines 218-231 restated the economics a third time. Three explanations. One concept. Identify core concepts, count how many times each gets explained, keep the best explanation and delete the rest.
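The counting step is mechanical enough to script. A minimal sketch, assuming you can name the core concepts up front; the paragraphs here are invented stand-ins for the five-tokens article:

```python
def count_concept_mentions(paragraphs, concepts):
    """Map each concept phrase to the indexes of paragraphs mentioning it."""
    hits = {c: [] for c in concepts}
    for i, para in enumerate(paragraphs):
        lowered = para.lower()
        for c in concepts:
            if c.lower() in lowered:
                hits[c].append(i)
    return hits

paragraphs = [
    "The VXX/VIX ratio measures how fast the ETN decays...",
    "An unrelated middle section about position sizing.",
    "To put it more simply, the VXX/VIX ratio tells you the decay rate...",
    "In economic terms, the VXX/VIX ratio is a decay clock...",
]
hits = count_concept_mentions(paragraphs, ["VXX/VIX ratio"])
for concept, where in hits.items():
    if len(where) > 1:
        print(f"'{concept}' appears in paragraphs {where}: keep one, cut the rest")
```

Exact-phrase matching undercounts when the AI varies the wording, so treat a count of one as "maybe fine" and a count of three as "definitely cut."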
Redundant Bullet Lists. Section A has a list. Section B later repeats the same list, reworded, under a different heading. From the linkedin audit: lines 48-61 listed performance cycle windows, lines 277-284 listed the same windows again under "Operational Intelligence." Extract all bullet lists, compare items across lists, merge if they cover the same information.
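That extract-and-compare pass can also be scripted for markdown source files. A sketch, assuming `-`/`*` bullets and exact item matches after lowercasing (the sample document is invented, loosely modeled on the linkedin audit):

```python
import re

def extract_bullet_lists(markdown):
    """Group consecutive '-'/'*' bullet lines into (start_line, items) tuples."""
    lists, current, start = [], [], None
    for n, line in enumerate(markdown.splitlines(), 1):
        m = re.match(r"\s*[-*]\s+(.*)", line)
        if m:
            if not current:
                start = n
            current.append(m.group(1).strip().lower())
        elif current:
            lists.append((start, current))
            current = []
    if current:
        lists.append((start, current))
    return lists

def overlapping_lists(lists, threshold=0.5):
    """Pairs of lists sharing at least `threshold` of the smaller list's items."""
    pairs = []
    for i in range(len(lists)):
        for j in range(i + 1, len(lists)):
            a, b = set(lists[i][1]), set(lists[j][1])
            share = len(a & b) / min(len(a), len(b))
            if share >= threshold:
                pairs.append((lists[i][0], lists[j][0], round(share, 2)))
    return pairs

doc = """Performance cycle windows:
- Tuesday 8-10am
- Thursday 9-11am

Later, under Operational Intelligence:
- Tuesday 8-10am
- Thursday 9-11am
- Saturday experiments
"""
print(overlapping_lists(extract_bullet_lists(doc)))
```

Reworded items (the harder case the audit actually hit) need the fuzzier word-overlap comparison from the section check, but exact duplicates — the most common case — fall out for free.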
Progressive Scope Narrowing. Introduction explains a broad concept. Section 1 explains the same concept with slightly narrower scope. Section 2 narrower still. "Ghost Says..." restates it as the key insight. Looks like progression; actually just narrowing focus on the same idea repeatedly. This is the subtle one: each section feels different because the scope changes, but the core message is identical. Summarize each section in one sentence. If the summaries are variations of the same core thesis, repetition exists. Merge sections or delete the redundant narrowing.
Red Team Prompting Techniques
Explicit anti-repetition instruction. The weak prompt is "Write an article about VXX trading." AI generates multiple explanations of the same VXX concept. The working prompt includes: "CRITICAL: Explain each concept once. No restating the premise in conclusion. One clear ending section only." AI actively avoids repetition when explicitly instructed.
Structural constraints. "Write about AI repetition with intro, body, and conclusion" produces generic structure with built-in repetition. Replace it with explicit structure and content assignments: "Structure: Opening (state problem), Technical Explanation (why it happens), Detection Methods (how to find it), Prevention Techniques (how to avoid it). Each section covers DIFFERENT information. No conclusion section that restates opening."
Token budget allocation. Allocate word counts per section with a requirement that the ending contain a "NEW perspective not covered above." Forces different content per section because the budget prevents redundant restating. AI tracks allocation, becomes aware of section boundaries. Works.
Negative examples. Spell out what not to do: "DO NOT: Restate the premise in the ending. Explain the same concept twice. Have multiple conclusion sections. Use phrases like 'As I mentioned earlier' or 'To reiterate.'" AI learns from explicit anti-patterns as well as positive instructions.
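The four techniques compose into a single prompt. This is one way to assemble them, not a canonical template — the section names, budgets, and wording are illustrative:

```python
def build_anti_repetition_prompt(topic, sections):
    """Combine explicit instruction, structural constraints, token budgets,
    and negative examples into one prompt. `sections` is a list of
    (name, content_assignment, word_budget) tuples."""
    structure = "\n".join(
        f"- {name} ({budget} words): {assignment}"
        for name, assignment, budget in sections
    )
    return f"""Write an article about {topic}.

CRITICAL: Explain each concept once. No restating the premise in the
conclusion. One clear ending section only, and it must add a NEW
perspective not covered above.

Structure (each section covers DIFFERENT information):
{structure}

DO NOT:
- Restate the premise in the ending
- Explain the same concept twice
- Have multiple conclusion sections
- Use phrases like "As I mentioned earlier" or "To reiterate"
"""

prompt = build_anti_repetition_prompt(
    "VXX trading",
    [
        ("Opening", "state the problem", 150),
        ("Technical Explanation", "why it happens", 400),
        ("Detection Methods", "how to find it", 300),
        ("Prevention Techniques", "how to avoid it", 300),
    ],
)
print(prompt)
```

The point of generating the prompt in code rather than pasting it: the structure and budgets stay explicit, so every article gets the same guardrails instead of whatever you remembered to type that day.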