The Blog That Builds Itself
Twenty days of commits, pivots, and debugging sagas had produced a game, but the story of how it got built was more interesting than any feature list. This post is about designing a blog series, building the infrastructure to feed it, hitting that infrastructure's own limits, and refining it into a pipeline that could outlive any single conversation.
Twenty days in. 197 commits. 177 Linear issues. A pile of pivots, debugging sagas, and the occasional midnight existential crisis about C++ linker errors. By day 20 of building Geo Climber, the game was real: playable, in beta, in people's hands. The economy worked. The backend worked. Thirteen geological zones ran in a Metal renderer at 60fps. The recording system shipped. The extensibility overhaul shipped. There was a game.
And the thing I kept noticing was that nobody cared about the feature list.
I'd mention the game to friends, to devs on Twitter, to people in Discord servers. "I'm building this iOS vertical climber with Claude Code, it has 13 zones from Earth's core to Everest, combo mechanics, a coin economy, here's a TestFlight link." Polite interest. Cool, sounds neat. Good luck. Next topic.
But when I told the story (how I found a C++ project on GitHub and decided to build an iPhone game despite having zero iOS experience, how Claude and I almost gave up on day 3 when the C++ bridge melted, how a web dev who'd never touched Metal or Swift shipped a playable beta in 16 days) people leaned in. They asked questions. They wanted the next chapter. The story was doing what the feature list couldn't.
"it's not 'try my game', but 'listen to my story' and 'how sick is this'"
That was the thesis that landed on the morning of day 20. The story is the marketing.
The brainstorm
The brainstorm took most of the morning. Claude and I worked through audience, distribution, voice, and structure before writing a single line of code.
Audience: a mix of developers, AI enthusiasts, and indie devs. Not optimized for any single group. The final goal is marketing the game; the blog is the natural path to reach people without it feeling like a lame ad.
"definetly a mix. the final goal is marketing my game. the blog is a way to reach different audiences in a more natureal way, that does'nt feel like a lame ad"
Distribution: canonical on geoclimber.app, aggressive cross-posting to LinkedIn, Medium, Reddit, X, and Facebook with per-platform adaptation. All of it automated eventually. I'm not hand-posting the same article to six platforms every week.
Voice: this was the decision that shaped the whole series. The voice should evolve across the posts to match the project's actual arc. Early posts are scrappy because the project was scrappy. Late posts are reflective because the project matured. Posts 1 through 3 get the raw, honest, "I have no idea what I'm doing" energy. Posts 4 through 7 grow into confident pattern-naming. Posts 8 through 10 earn their hot takes. Posts 11 and beyond get to be systematic and mature: detailed breakdowns, process reflection, full thesis claims.
I didn't want a blog that sounded the same on day 1 and day 20. That would be dishonest. The project didn't feel the same.
Structure: chronological spine, topic-driven within each post. Not "here's what happened on Tuesday" but "today I learned this thing and here's the whole story." Mistakes and reversals are the content, not footnotes. This was my refinement of Claude's initial proposal. Claude wanted strict two-day summaries; I wanted subject-driven posts anchored to the timeline.
"a different hybrid - everything is timeline attached, but instead of doing a strict 2 days summary, go by subject, today i learned"
We mapped out 14 posts covering the full 20-day arc. Each one with a thesis, a set of key moments, a target length, and a voice register. The plan was to batch-write everything first, then drip-release chronologically. Not because I was trying to cheat the cadence, but because writing them out of order would lose the emotional progression.
The post-by-post review
This was the richest part of the day.
We walked through each of the 14 planned posts one at a time, and Claude asked questions the commit log couldn't answer. What was the emotional shape of the near-quit on day 3? What happened in the car conversation that started the whole project? Why did the ski trip to Andorra matter? How did the 2,500-line file problem feel different from the linker error problem?
The commit log tells you what. It doesn't tell you why it mattered or how it felt.
For every post, Claude surfaced gaps. The friends conversation that spawned the whole concept: that needed the satirical viral hook context (a deliberately provocative idea I'll spare you the specifics of, the kind of thing that's funny in a car at midnight but doesn't scale beyond the inside joke). The pivot from that satirical framing to Geo Climber's educational geological identity was the real origin story, not the GitHub search.
Then there was Or, a friend (someone I run stuff by), who came up during the rebrand post. He'd coined the name "Geo Climber" at exactly the right moment. Single line of attribution, not a whole section. Claude's initial draft had inflated his role, framing him as a collaborator. I corrected that.
"downplay this sections a lot. he's helping, but definitely not an equal contributor or collaborator/partner."
IP safety came up immediately. The game evolved from an open-source C++ vertical climber on GitHub, a genre project, MIT-licensed, fully rewritten since. But the specific arcade brand the genre riffs on doesn't get named in public posts. Not because of legal risk, but because of editorial cleanliness. Geo Climber stands on its own.
"don't get me into copyright issues with the story against the original creator by mistake"
The review took hours. And somewhere in the middle of it, the blog series stopped feeling like a plan and started feeling like a thing.
Building the capture system
Then the meta realization hit. The biggest threat to the blog series wasn't writing the posts. It was losing the raw material between sessions.
I can't remember to journal by hand. I've tried. I've failed. Every solo dev journaling system I've ever started has died within a week. The conversations with Claude are the richest source material (they contain the decisions, the emotional reactions, the quotes, the moments where I changed my mind) and they vanish when the session ends.
"i need claude to automatically remember to update notes as we go. from conversations, from code changes, PRs. i need it as a skill maybe"
"this is the core. my conversations are the mosssst important thing"
So we designed a capture system. Three paths feeding one daily file.
Path 1: git post-commit hook. Every commit gets appended to today's journal entry automatically. Timestamp, SHA, message, file count. No human effort. The hook fires on every commit, skips commits that only touch journal/entries/ (self-reference guard against infinite loops), and writes clean markdown.
Path 2: session hooks. When a Claude Code session ends or context gets compacted, a shell script dumps the raw transcript to journal/.raw/ (gitignored, sensitive material). The transcripts are noisy, but they're the insurance policy against memory loss.
Path 3: the blog:journal skill. A manual trigger. Run it at natural break points, and it synthesizes the day's commits, conversations, and raw transcripts into a coherent journal entry. Obsidian-compatible markdown, one file per day, internal-only sources metadata that never gets rendered to blog readers.
The implementation ran through subagent-driven development. One agent per task, code review between each, clean merges. The git hook got a spec review and a code review because it had real bash logic, and the code reviewer caught a subtle bug: awk's -v flag interprets backslash escape sequences in its values, so any commit message containing \n or \t would have corrupted the journal. Fixed with ENVIRON[] instead. The kind of bug you never find by reading the code. You find it when a commit message contains "fix: escape \n" and the journal renders a literal newline.
By the time the capture system was built, the git post-commit hook had already captured every commit of its own construction. The system was running on itself before it was finished.
Building the blog engine
The capture system feeds the blog. The blog needs an engine.
The blog engine is MDX on Next.js, built in the same session via another round of subagent-driven development. Ten tasks across five phases. next-mdx-remote/rsc for loading MDX from a content directory. Compile at build time to satisfy the site's strict Content Security Policy. Tailwind Typography with a custom prose-geo class mapping the plugin's CSS variables to existing brand tokens. Article JSON-LD for structured data, mirroring the existing injection pattern from layout.tsx byte-for-byte.
The /draft-post skill (later renamed blog:draft) was the bridge between raw journal entries and polished blog posts. Give it a journal entry date, and it produces an MDX file in the content directory.
Post 1 went up: "I Found a Game on GitHub." Then the dam broke. Thirteen more drafts followed in a wave, one for each mapped post in the series. By early afternoon, there were 14 blog posts in various states of rough-cut prose.
Then the review started.
The turn
The interactive review of 13 drafts was where the day's shape changed.
It started fine. Post by post, I'd check the voice, the factual accuracy, the IP safety. Post 2's vague "ContentView or similar" became the specific ContentView.swift. Post 3's forward references to future posts got cut (broken links waiting to happen). Post 9 had the zone art pipeline cost wrong. The real number was $25, not the $400 the draft claimed. Each fix was reasonable. Each post improved.
But by post 7 or 8, I noticed the same issues firing repeatedly. Day numbers spelled out as words ("day one" instead of "day 1"). Forward references to specific future posts. Or's role inflated from "a friend" to "informal collaborator." Fortune-cookie thesis lines, bold sentences at section ends that generalized beyond the post's content in the way that AI-written prose loves and readers distrust. And once, a code snippet that leaked the original project's directory name.
Every post had some combination of the same five or six issues. Catching them individually was working. Scaling that to a series that would keep producing posts for months was not. I was being the linter, and I was already tired of it.
That's not an editorial problem. That's a missing system.
The refinement
The pattern was clear. If the same issues keep surfacing, they should be rules. If the rules exist, a review should enforce them. If the review runs on every draft, it should run automatically. And if drafts are being reviewed iteratively, they need a staging area, not the public content directory where a half-finished file could accidentally go live.
Four missing pieces, one coherent shape.
The loose skills collapsed into a four-stage blog plugin: blog:journal for capture, blog:suggest for proposing new post candidates from journal entries, blog:draft for turning a candidate into a full MDX draft, blog:review for applying the editorial checklist. The draft skill auto-chains into the review skill. You can't draft without being reviewed.
"no need for extra post prefix if going plugin, because it will now be blog:journal blog:suggest blog:draft etc."
The rules themselves became persistent files in journal/style/. Six files seeded from the lessons of the draft wave: voice progression by post position, how to frame people and characters, series structure (day forms, forward references, dynamic numbering), content pacing (trim saga walkthroughs, specific facts not placeholders), IP safety, and a machine-readable review checklist with severity classifications. Every finding from the review wave (every "day one" that should be "day 1," every forward reference that could break, every inflated character framing) became a rule in one of these files.
The style directory grows over time. Every editorial decision becomes a persistent rule, so each subsequent review starts from an accumulated standard rather than discovering rules from scratch. It works the same way a codebase does. You don't re-debate naming conventions on every PR if there's a linter enforcing them.
"journal should get another subdir - drafts, which should be destination for the blog:draft blog:review workflow"
Drafts now land in journal/drafts/, get reviewed by blog:review, and only move to the public content directory when I approve them. The staging area solved the accidental-publish risk and gave the review loop a stable target. Write, review, fix, review again, promote.
Dynamic numbering
A smaller but load-bearing decision emerged from the draft review. Every post had a hardcoded postNumber in its frontmatter and a manually written footer: "This is post 12 of 14 in a series about building Geo Climber with Claude Code." Fourteen copies of that boilerplate, each with a different number.
Then I deleted a post. The speculative AI player post (Post 13 in the original plan) got scrapped because the actual AI player work was happening in a parallel Claude session and the post needed to be written after the work concluded, not before. Suddenly every post numbered above 12 was wrong. Post 14 still said "Post 14" even though it would publish before the missing Post 13.
"all of the 'Post 12 of 14' and 'This is post 12 of a series about building Geo Climber with Claude Code. The recording system shipped...' should be fully dynamic."
Full restructure. Removed postNumber from the frontmatter type entirely. Added getPostDisplayNumber(slug) that computes a 1-based position from ascending publishedAt order. Extracted the footer boilerplate into a PostFooter component that takes displayNumber, totalPosts, and a teaser field from the frontmatter. Stripped the hardcoded footer from all 13 MDX files in one pass.
Publishing a post is now just letting its publishedAt date arrive. No status field, no manual numbering, no scheduled rebuilds. Request-time filtering does the rest. The series can grow or shrink without touching a single existing file.
The backfill
The pipeline was four stages: journal, suggest, draft, review. But it only knew about the future. The 12 posts already drafted and reviewed existed as MDX files in the content directory. The pipeline had no record of them as candidates with briefs, journal references, and editorial context.
"journal candidates need backfill by splitting each entry to what it's supposed to be, and by reading the current approved blogposts"
One candidate file per existing post, retrospectively. Not because the posts needed rewriting, but because the system needed to know those posts exist and what they were supposed to be. A content pipeline that only tracks future work is half a pipeline. The candidate directory went from "where new ideas live" to "the complete index of what the series is."
Then the entry retrofit. The journal entries had accumulated candidate-level fields during earlier brainstorming: title, post_refs, blog_status. Those don't belong in entries. Entries are raw capture. Candidates own the editorial metadata.
"current entries contain candidates logic.. 'title' does not belong there."
Stripped every entry back to its capture-only format. One-way link: candidates reference entries via journalRefs, entries don't know about candidates. Clean separation of concerns. The raw material doesn't know what it's for; the editorial layer decides.
Collapsing the arc closer
Late in the evening, the plan had two remaining posts mapped for day 20: Post 13 covering the morning brainstorm and capture system, Post 14 covering the afternoon blog engine and refinement. Two posts about the same day's work.
I looked at them and realized the split was artificial. The morning wasn't a separate story. The brainstorm, the capture system, the blog engine, the editorial fatigue, the refinement into a pipeline. That's one arc. The thesis that started the morning ("the story is the marketing") runs through every decision in the afternoon. Splitting it would have meant either repeating the thesis twice or leaving one half without a through-line.
"collapse to one by rm the draft, and merging the candidate."
One arc closer. This post.
The recursion (once, briefly)
The conversation that designed the capture pipeline was being captured by the pipeline as it was being built. The git post-commit hook logged every commit of its own construction. The journal entry for day 20 was the first entry ever written by the blog:journal skill. And this post, the last post of the initial arc, was the first thing ever drafted for the pipeline by the pipeline about the pipeline.
That's a neat beat. It's not the thesis.
The pipeline goes live
Day 21 morning: PR #41 squash-merged to master. 45 commits, 112 files changed. The journal capture system, the blog engine, the blog plugin with four stages, the candidate backfill, the entry retrofit, the dynamic numbering, the style directory. All of it in one branch, shipped clean.
Posts 1 and 2 went live as the opening pair. The marketing site got a mobile-first overhaul. The hero was wonky on phones, and it's a mobile game after all. The homepage went from roughly 3,500 words to 800, tighter conversion funnel, the blog carrying content depth instead of the landing page.
Day 22: the pipeline produced for real. blog:suggest read the journal entries from recent sessions and proposed three new candidates. blog:draft launched all three drafts in parallel via subagents, each reading the candidate brief and the relevant journal entries and the style guide. blog:review caught the expected patterns: a "day one" that should be "day 1," a teaser that duplicated the description field, a section that ran long. The pipeline didn't produce perfect drafts. It produced reviewable drafts with a consistent baseline, which is exactly what it was designed to do.
Then syndication. The four-stage pipeline had a gap: posts went from drafted to published, and then I was supposed to manually cross-post them to Reddit, Hacker News, LinkedIn, and Twitter. That manual step was the same kind of friction that led to the whole pipeline in the first place.
blog:syndicate fills the gap. It reads platform-specific instructions (each platform has a research corpus and a set of rules about what works there) and generates tailored versions of each post. Not identical reposts. Platform-native adaptations. The Reddit version for r/gamedev reads differently from the Hacker News version, which reads differently from the LinkedIn version.
blog:research-subreddit feeds the syndication skill: point it at a subreddit, it fetches top and hot posts, analyzes what resonates, and produces an instructions.md that tells the syndication skill how to write for that community.
Six stages now. Journal, suggest, draft, review, publish, syndicate. From raw conversation capture to platform-tailored cross-posting. The first two stages are automatic (the git hook fires on every commit). The last four are one command each.
What the arc was really about
The game is still the thing.
Geo Climber exists. It runs on iPhones. People climb through 13 geological zones from the inner core to the death zone on Everest. The combos feel satisfying. The economy works. The backend scales. An AI is learning to play it. The blog exists because the story of building all of that is worth telling, and the story is worth telling because the game is real. Flip that relationship and you get a content marketing exercise about a demo project. The game is the reason any of this matters.
But the blog itself taught me something I wasn't expecting. The capture system, the editorial pipeline, the style directory, the staging workflow: these aren't blog infrastructure. They're a knowledge management system for a solo dev project that happens to produce blog posts as a side effect. The journal entries are useful to me regardless of whether anyone reads the blog. The candidate briefs help me plan what to write about next. The review checklist catches drift in my own thinking. The pipeline is a tool I built for the blog, and it turns out it's also a tool I built for myself.
"The story is the marketing. You listened. I made the thing. Here it is."
That was the thesis from the morning. Fourteen posts later (twelve published, two more drafted, a pipeline producing the next batch) the thesis holds. People don't tune in for feature lists. They tune in for the story of someone building something real, making mistakes, fixing them, and shipping anyway. The infrastructure exists to make that sustainable. The pipeline exists so the story doesn't stop when the momentum does.
The AI player experiment is in progress. The App Store launch is next. The series continues as the project continues. Thanks for reading this far.
This is post 16 of 18 in a series about building Geo Climber with Claude Code. The content pipeline is live. Six stages, from journal capture to cross-platform syndication, all built in the same conversation that designed the series it feeds. Join the Discord and download Geo Climber on the App Store.