Agentic Coding: How I Let AI Agents Handle My Entire Book Launch Pipeline
I published a technical book about training large language models. Not a pamphlet — a real book with working code examples, architecture diagrams, and enough depth that reviewers on Amazon called it "the guide they wished existed when they started." And I launched it using AI agents for almost every step of the pipeline except the actual writing.
Here's the thing nobody tells you about publishing a technical book: writing the content is maybe 40% of the work. The other 60% is formatting, metadata, marketing copy, social media, email campaigns, and all the tedious operational nonsense that makes most authors abandon their second book before they finish their first.
I decided to throw AI agents at that 60%. Some of it worked brilliantly. Some of it was a disaster. All of it was educational.
What I Mean by "Agentic Coding"
Let me be precise because this term gets thrown around loosely. I'm not talking about using ChatGPT to write a tweet. I'm talking about setting up autonomous or semi-autonomous AI workflows that take a high-level instruction and execute a multi-step process with minimal human intervention.
The difference between "using AI" and "agentic AI" is the difference between asking someone to hand you a wrench and hiring a contractor to renovate your bathroom. The agent doesn't just answer a question — it plans, executes, evaluates, and iterates.
For my book launch, this meant building pipelines where I could say "generate Amazon listing metadata for a technical book about LLM training" and get back a complete package: title variations, subtitle options, keyword lists, category recommendations, and a formatted description with HTML that Amazon's backend actually accepts.
The Pipeline: What AI Agents Actually Handled
Stage 1: Manuscript Formatting
The manuscript was written in Markdown — my preferred format for anything technical because code blocks render properly and version control actually works. But Amazon KDP wants a specific kind of formatted document, and the Kindle format has its own quirks around code rendering, image placement, and table of contents generation.
I built a Node.js script that used Claude to handle the conversion logic:
const fs = require('fs');
const path = require('path');

async function convertChapter(markdownContent, chapterNumber) {
  const prompt =
    'Convert this Markdown chapter to KDP-compatible HTML. ' +
    'Preserve all code blocks with monospace formatting. ' +
    'Add proper heading hierarchy for Kindle navigation. ' +
    'Chapter number: ' + chapterNumber;
  // callClaudeAPI wraps the Anthropic API call; defined elsewhere
  return callClaudeAPI(prompt, markdownContent);
}

async function main() {
  const chapters = fs.readdirSync('./chapters').filter((f) => f.endsWith('.md'));
  for (const [index, file] of chapters.entries()) {
    const content = fs.readFileSync(path.join('./chapters', file), 'utf8');
    const formatted = await convertChapter(content, index + 1);
    fs.writeFileSync(path.join('./output', file.replace(/\.md$/, '.html')), formatted);
  }
}

main();
What worked: The basic conversion was solid. Claude understood Markdown-to-HTML conversion perfectly and produced clean, well-structured output. Code blocks were properly wrapped in <pre><code> tags with appropriate styling.
What didn't work: Kindle's rendering engine is bizarre. Some CSS properties that work fine in browsers get ignored or mangled on Kindle devices. I had to manually test on three different Kindle models and adjust the CSS by hand. The AI had no way to know about Kindle-specific rendering bugs because those aren't well-documented anywhere.
Human intervention required: About 4 hours of manual CSS tweaking and device testing. The AI saved me maybe 20 hours of formatting work, but the last mile was entirely manual.
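In retrospect, some of that manual CSS tweaking could have been captured in code. Here is a minimal sketch of the kind of post-processing pass I mean: stripping inline CSS properties that the Kindle renderer mishandles. The property list below is illustrative, not an authoritative catalog of Kindle quirks.

```javascript
// Hypothetical post-processing pass: strip inline CSS properties that
// Kindle's renderer mishandles. The property list is illustrative only.
const UNSUPPORTED = ['float', 'position', 'box-shadow'];

function stripUnsupportedCss(html) {
  return html.replace(/style="([^"]*)"/g, (match, rules) => {
    // Keep only the declarations that are not on the unsupported list
    const kept = rules
      .split(';')
      .map((r) => r.trim())
      .filter((r) => r && !UNSUPPORTED.some((p) => r.startsWith(p + ':')));
    return kept.length ? 'style="' + kept.join('; ') + '"' : '';
  });
}
```

Once you know which properties a given device chokes on, encoding them in a pass like this means the next book's conversion starts from a cleaner baseline.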
Stage 2: Cover Design Prompts
I didn't use AI to design the cover directly — the current state of AI image generation still struggles with text rendering on book covers, and a bad cover kills sales faster than anything else. What I did was use AI agents to generate detailed design briefs for a human designer.
The prompt chain looked like this:
- Feed the AI the book's table of contents, synopsis, and target audience
- Ask it to analyze the top 20 bestselling covers in the "Machine Learning" category on Amazon
- Generate three distinct design briefs with color palettes, typography recommendations, layout suggestions, and mood descriptions
The output was genuinely useful. One of the briefs described "a dark navy background with a neural network visualization in gold wireframe, clean sans-serif title typography, subtitle in a lighter weight" — which was close to what we ended up using.
What worked: The competitive analysis was excellent. The AI identified patterns I hadn't consciously noticed — that successful ML book covers overwhelmingly use dark backgrounds with bright accent colors, that sans-serif fonts dominate, that abstract geometric patterns outperform realistic imagery.
What didn't work: The AI's specific color hex code recommendations were mediocre. It picked colors that looked fine in isolation but didn't have enough contrast for thumbnail rendering, which is how most people first see your cover on Amazon. My designer caught this immediately.
Stage 3: Amazon Metadata
This is where agentic AI really earned its keep. Amazon KDP metadata is a tedious but critical optimization problem. You get seven keyword slots, two category selections, a title, subtitle, and a description with limited HTML support. Getting these right is the difference between a discoverable book and a buried one.
I built an agent workflow that:
- Scraped the top 50 books in related categories for keyword patterns
- Generated 30 candidate keyword phrases ranked by estimated search volume
- Produced five title/subtitle combinations optimized for different search intents
- Wrote three versions of the book description at different lengths
- Recommended primary and secondary categories with reasoning
The whole process ran in about four minutes and produced a structured JSON output that I could review and select from.
const metadata = {
  keywords: [
    "large language model training",
    "LLM fine-tuning tutorial",
    "transformer architecture practical guide",
    "machine learning model training",
    "GPT training from scratch",
    "deep learning NLP handbook",
    "neural network training pipeline"
  ],
  primaryCategory: "Computer Science > AI & Machine Learning",
  secondaryCategory: "Programming > Software Engineering",
  description: "<!-- HTML-formatted description here -->"
};
What worked: The keyword research was better than what I would have done manually. The AI identified long-tail phrases like "transformer architecture practical guide" that I wouldn't have thought of but that perfectly match how engineers actually search for this kind of book.
What didn't work: The AI's category recommendations were slightly off. It suggested "Data Science" as a secondary category, but my book is more about engineering than data science. I manually switched it to "Software Engineering," which turned out to be less competitive and gave the book better visibility.
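One cheap guardrail worth adding to a pipeline like this: validate the agent's keyword output against KDP's hard limits before you ever look at it. The sketch below assumes seven slots and a 50-character-per-slot limit, which matches my understanding of KDP's rules at the time of writing; check the current KDP documentation before relying on those numbers.

```javascript
// Sanity-check agent-generated keywords against assumed KDP limits:
// seven slots, 50 characters per slot (verify against current KDP docs).
const MAX_SLOTS = 7;
const MAX_SLOT_LENGTH = 50;

function validateKeywords(keywords) {
  const errors = [];
  if (keywords.length > MAX_SLOTS) {
    errors.push('Too many keywords: ' + keywords.length + ' (max ' + MAX_SLOTS + ')');
  }
  for (const kw of keywords) {
    if (kw.length > MAX_SLOT_LENGTH) {
      errors.push('Keyword too long: "' + kw + '"');
    }
  }
  return errors;
}
```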
Stage 4: Marketing Copy
This was the stage where I let agents run most autonomously, and the results were mixed.
I set up a pipeline that generated:
- Landing page copy — a full sales page with headline, subheadlines, bullet points, social proof sections, and a call to action
- Email sequences — a five-email launch sequence for my newsletter subscribers
- Social media posts — twenty variations across Twitter/X, LinkedIn, and Reddit formats
- Blog post announcements — two announcement articles for Grizzly Peak Software
For the email sequence, I gave the agent this structure:
Email 1 (Day -7): Teaser - mention the book is coming, share one insight
Email 2 (Day -3): Preview - share a complete code example from Chapter 4
Email 3 (Day 0): Launch - the book is live, direct link, launch pricing
Email 4 (Day +3): Social proof - early reviews, reader feedback
Email 5 (Day +7): Last chance - final reminder about launch pricing
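The relative-day structure above translates directly into concrete send dates once a launch day is fixed. A small sketch of that translation (the field names are my own, not part of any email provider's API):

```javascript
// Turn the Day -7 / Day 0 / Day +7 offsets from the sequence above
// into concrete send dates for a given launch date.
const SEQUENCE = [
  { name: 'Teaser', offsetDays: -7 },
  { name: 'Preview', offsetDays: -3 },
  { name: 'Launch', offsetDays: 0 },
  { name: 'Social proof', offsetDays: 3 },
  { name: 'Last chance', offsetDays: 7 }
];

function scheduleSequence(launchDate) {
  return SEQUENCE.map((email) => {
    const sendDate = new Date(launchDate);
    sendDate.setDate(sendDate.getDate() + email.offsetDays);
    return { name: email.name, sendDate: sendDate.toISOString().slice(0, 10) };
  });
}
```

Feeding these dates into the email provider's scheduling API is then a mechanical step.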
The agent generated all five emails with subject lines, preview text, body copy, and CTAs. Each one was tailored to the specific purpose and had a distinct tone — the teaser was casual and curious, the launch email was energetic and direct, the social proof email was warm and grateful.
What worked: The email sequence was genuinely good. I edited maybe 15% of the copy — mostly to add personal anecdotes the AI couldn't know about, like the story of debugging a training pipeline at 2 AM during an Alaskan winter storm while my generator was running low on fuel. Those human details are what make marketing copy feel real rather than corporate.
What didn't work: The social media posts were the weakest output. The LinkedIn posts were too generic — they sounded like every other "I'm excited to announce" post on the platform. The Reddit posts were better because I specifically instructed the agent to write in a "sharing something useful, not promoting" tone, but even those needed significant editing to not feel like astroturfing.
The blog announcement posts were solid first drafts but lacked the specific voice and anecdotes that make my writing mine. They read like competent technical marketing copy. Not bad. Not me.
Stage 5: Launch Day Automation
On launch day, I had a simple Node.js script that:
- Checked that the Amazon listing was live
- Sent the launch email via my email provider's API
- Posted pre-approved social media content at staggered intervals
- Updated the book promotion banners on Grizzly Peak Software
- Logged everything to a status dashboard
const schedule = require('node-schedule');

const launchTasks = [
  { time: '06:00', task: 'sendLaunchEmail', platform: 'email' },
  { time: '08:00', task: 'postToTwitter', platform: 'twitter' },
  { time: '10:00', task: 'postToLinkedIn', platform: 'linkedin' },
  { time: '12:00', task: 'postToReddit', platform: 'reddit' },
  { time: '14:00', task: 'sendReminderEmail', platform: 'email' }
];

launchTasks.forEach((item) => {
  // node-schedule expects cron syntax, so convert "HH:MM" to "MM HH * * *"
  const [hour, minute] = item.time.split(':');
  schedule.scheduleJob(minute + ' ' + hour + ' * * *', () => {
    console.log('Executing: ' + item.task);
    executeLaunchTask(item.task, item.platform);
  });
});
This was straightforward automation rather than AI — but the content being posted had all been generated and refined through the agentic pipeline. The AI did the creative work; traditional code handled the scheduling.
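The executeLaunchTask call in the script above is a thin dispatch layer. A hypothetical sketch of it, with the real API calls replaced by stubs, looks like this:

```javascript
// Hypothetical dispatch table for launch tasks. The handlers are stubs;
// in the real script each one calls the relevant platform API.
const handlers = {
  sendLaunchEmail: () => 'email: launch sent',
  postToTwitter: () => 'twitter: posted',
  postToLinkedIn: () => 'linkedin: posted',
  postToReddit: () => 'reddit: posted',
  sendReminderEmail: () => 'email: reminder sent'
};

function executeLaunchTask(task, platform) {
  const handler = handlers[task];
  if (!handler) {
    // Fail loudly on a typo in the task list rather than silently skipping
    throw new Error('Unknown launch task: ' + task + ' (' + platform + ')');
  }
  return handler();
}
```

The dispatch-table shape makes it easy to add or remove platforms between launches without touching the scheduler.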
The Honest Scorecard
Here's my frank assessment of each stage:
| Stage | AI Contribution | Human Time Saved | Quality Without Editing |
|-------|-----------------|------------------|-------------------------|
| Manuscript Formatting | 85% of the work | ~20 hours | 7/10 |
| Cover Design Briefs | 60% of the work | ~5 hours | 6/10 |
| Amazon Metadata | 90% of the work | ~8 hours | 8/10 |
| Marketing Copy | 70% of the work | ~15 hours | 6/10 |
| Launch Automation | 30% (content only) | ~3 hours | 8/10 |
Total estimated time saved: ~51 hours.
That's significant. That's more than a full work week that I got back to spend on things that actually require human judgment — like deciding which code examples best illustrate a concept, or whether a chapter's structure makes sense for someone learning the material for the first time.
What I'd Do Differently
Give the Agent More Context About Voice
The biggest weakness in the marketing copy was voice. The AI produced competent, professional copy that could have been written by anyone. The emails and posts that performed best were the ones I heavily edited to include specific personal details — the Alaska cabin, the 30 years of experience, the honest admission that some chapters took me three rewrites to get right.
Next time, I'd include a detailed voice guide in the agent's instructions: example sentences, phrases I commonly use, topics I reference, the general attitude I bring to technical writing. Basically, a style guide for myself.
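Concretely, the voice guide could be a small structured object that gets folded into the agent's system prompt. Everything in this sketch is illustrative: the field names and example contents are placeholders for whatever actually characterizes your writing.

```javascript
// Hypothetical voice guide: structured input prepended to the agent's
// system prompt. Field names and contents are illustrative placeholders.
const voiceGuide = {
  sentenceExamples: [
    "Here's the thing nobody tells you about publishing a technical book.",
    'Some of it worked brilliantly. Some of it was a disaster.'
  ],
  recurringTopics: ['the Alaska cabin', '30 years of engineering experience'],
  attitude: 'direct, self-deprecating, allergic to corporate polish'
};

function buildSystemPrompt(guide) {
  return [
    'Write in the following voice.',
    'Example sentences: ' + guide.sentenceExamples.join(' | '),
    'Topics the author references: ' + guide.recurringTopics.join(', '),
    'Attitude: ' + guide.attitude
  ].join('\n');
}
```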
Use Different Models for Different Stages
I used Claude for everything because it was convenient. In retrospect, I should have been more strategic:
- Manuscript formatting: Claude (excellent at code and structured transformation)
- Cover design analysis: GPT-4 with web browsing (better at analyzing visual trends)
- Amazon metadata: o1 Pro (better at optimization reasoning with constraints)
- Marketing copy: Claude (better at matching a specific voice when given examples)
- Social media: Honestly, just write these yourself. They take ten minutes and the personal touch matters more here than anywhere else.
Build in Review Checkpoints
I let the pipeline run too autonomously in some stages. The social media posts went from generation to my review queue without any intermediate quality check. A simple scoring step — where a second AI model evaluates the output against criteria before passing it forward — would have caught the generic LinkedIn posts before they reached me.
This is the actual value of agentic architecture: not removing humans from the loop, but putting them at the right points in the loop. I want to review the final marketing copy. I don't want to review the intermediate keyword research. Building the pipeline to reflect those priorities is the engineering challenge.
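The scoring checkpoint described above can be sketched as a simple gate: a scorer rates each draft against criteria, and only drafts above a threshold reach the review queue. In production the scorer would be a second model call; the stub below just penalizes stock phrases, purely to illustrate the shape of the gate.

```javascript
// Stub scorer standing in for a second-model evaluation call.
// Penalizes stock marketing phrases a reviewer model would flag.
function scoreDraft(draft) {
  const stockPhrases = ["i'm excited to announce", 'game-changer', 'thrilled to share'];
  const hits = stockPhrases.filter((p) => draft.toLowerCase().includes(p)).length;
  return Math.max(0, 10 - hits * 4); // 0-10 scale, lower with each cliche
}

// Only drafts scoring at or above the threshold reach the review queue.
function qualityGate(drafts, threshold) {
  return drafts.filter((d) => scoreDraft(d) >= threshold);
}
```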
The Bigger Picture: What Agentic Workflows Mean for Solo Creators
I run Grizzly Peak Software essentially as a one-person operation. I built a job board, a 500-article technical library, a programmatic SEO site at AutoDetective.ai, and now a published book. Ten years ago, any one of those would have been a full-time job. Five years ago, maybe two of them simultaneously with a lot of late nights.
The agentic approach isn't about replacing human creativity or judgment. It's about eliminating the operational overhead that prevents solo creators from shipping. The book was ready for months before I launched it because I was dreading the metadata optimization, the email sequences, the social media planning. That dread evaporated when I realized I could build a pipeline to handle 70% of it.
The 30% that still needs me? That's the fun part. That's the writing, the teaching, the sharing of hard-won knowledge. That's why I wrote the book in the first place.
AI agents didn't write my book. But they launched it. And they did it well enough that I'm already planning the next one — because the launch pipeline is built, tested, and ready to run again.
Shane is the founder of Grizzly Peak Software and the author of a technical book on training large language models. He launches things from a cabin in Alaska, where the wifi is unreliable but the motivation is not.