Building AI SaaS in 24 Hours: Wild Experiments with $0 Marketing Budgets

I have a problem. Every time I get an idea for a tool I think is interesting, I want to build it immediately. Not plan it. Not validate it extensively. Build it, get it running, see if it's actually useful.

This habit has produced some embarrassing failures and at least a few things I'm genuinely proud of. More importantly, it's produced a set of patterns for rapid-building that I've refined over time — specifically for AI-assisted SaaS tools built with tight time and budget constraints.

This is the teardown.


Why 24 Hours Is the Right Constraint

Speed constraints force prioritization in a way that extended timelines don't. When you have a week, you'll spend two days on architecture decisions, a day on the perfect color scheme, and half a day configuring your development environment. When you have 24 hours, you make fast decisions and keep moving.

The 24-hour frame also provides a natural exit condition. If it isn't interesting enough to spend 24 focused hours on, it isn't interesting enough to build. If it is worth building, 24 hours produces something you can put in front of real users.

This matters because the most common failure mode for side projects isn't shipping something bad. It's never shipping at all.


The Stack That Makes 24-Hour Builds Possible

The stack you build with repeatedly compounds in value. Every new project is faster because you're not making tech choices, you know where things go, and you have solved variations of the same problems before.

My default stack for rapid AI SaaS builds:

Express.js            — API and server-side rendering
EJS                   — Templates (I know, I know — it ships fast)
Bootstrap             — Styling without decisions
PostgreSQL            — Relational data, surprisingly flexible
Claude API / Anthropic SDK — The AI layer
DigitalOcean          — $5/month droplet gets you started
PM2                   — Process management
Nginx                 — Reverse proxy, SSL termination

Nothing exotic. Everything well-documented. Every component I can configure from memory without looking up syntax.

The tooling choices that used to feel like constraints now feel like freedom. When you're not making foundational decisions, you're building features.


The Anatomy of a 24-Hour Build

Here's how the hours actually distribute across a typical build.

Hour 0-2: Definition and Scaffold

The build starts before a single line of code. The most important work is getting the problem statement clear enough that you won't change your mind about what you're building halfway through.

I write a one-page spec. Not a detailed requirements doc — a paragraph describing the core user action and value, a list of the essential features (max 5), and a list of explicit non-goals.

Then scaffold the project:

mkdir my-new-thing
cd my-new-thing
npm init -y
npm install express ejs pg @anthropic-ai/sdk dotenv
mkdir views routes utils
touch app.js .env

Database schema design happens here too. Getting this wrong costs time later. Getting it right — even roughly — saves it.
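For the schema step, it helps to keep the DDL in the repo and apply it idempotently at startup, so a fresh droplet boots with zero manual steps. A sketch for the analyzer example used later in this post (table and column names are hypothetical, not from a real project):

```javascript
// db/schema.js — applied with CREATE TABLE IF NOT EXISTS so
// running it at every boot is idempotent; real migrations can come later.
const SCHEMA = `
CREATE TABLE IF NOT EXISTS analyses (
  id          SERIAL PRIMARY KEY,
  input_code  TEXT NOT NULL,
  result      TEXT NOT NULL,
  created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);
`;

// With node-postgres: await pool.query(SCHEMA) during app startup.
module.exports = { SCHEMA };
```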

Hour 2-6: Core Feature Loop

The part that determines whether the project succeeds or fails. Build the single most essential feature — the core loop that the entire tool exists to serve — before touching anything else.

For an AI SaaS tool, this usually means:

  1. The form or input mechanism
  2. The prompt construction logic
  3. The Claude API call
  4. The response display

Everything else is scaffolding. If the core loop doesn't work or isn't interesting, nothing else matters.

Example from a recent project — a tool that analyzes Express.js codebases for security vulnerabilities:

// routes/analyze.js
const express = require('express');
const router = express.Router();
const Anthropic = require('@anthropic-ai/sdk');
const client = new Anthropic();

router.post('/analyze', async (req, res) => {
  const { code } = req.body;

  if (!code || code.trim().length === 0) {
    return res.render('analyze', { error: 'Please paste some code to analyze.' });
  }

  try {
    const message = await client.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 2000,
      messages: [{
        role: 'user',
        content: `You are a security expert specializing in Node.js and Express.js applications. 

Analyze the following code for security vulnerabilities. For each issue found:
1. Name the vulnerability type
2. Explain the risk in plain language
3. Show the specific vulnerable code
4. Provide a corrected code example

If no significant vulnerabilities are found, say so clearly and explain why the code appears safe.

Code to analyze:
\`\`\`javascript
${code}
\`\`\`

Format your response as structured sections, one per vulnerability found.`
      }]
    });

    const analysis = message.content[0].text;
    res.render('result', { analysis, originalCode: code });
  } catch (error) {
    console.error('Analysis error:', error);
    res.render('analyze', { error: 'Analysis failed. Please try again.' });
  }
});

module.exports = router;

This is the core feature. It works. It produces something useful. Everything else builds from here.
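One early refactor worth making once the loop works: pull the prompt construction (step 2) out of the route into a pure function, since it's the part you'll iterate on most and the easiest to unit test. A minimal sketch — the `buildAnalysisPrompt` name and structure are mine, not from a specific project:

```javascript
// utils/prompt.js
// Build the analysis prompt from user-supplied code.
// Keeping this a pure function makes it trivial to test and to
// iterate on wording without touching the route handler.
function buildAnalysisPrompt(code) {
  return [
    'You are a security expert specializing in Node.js and Express.js applications.',
    '',
    'Analyze the following code for security vulnerabilities. For each issue found:',
    '1. Name the vulnerability type',
    '2. Explain the risk in plain language',
    '3. Show the specific vulnerable code',
    '4. Provide a corrected code example',
    '',
    'Code to analyze:',
    '```javascript',
    code,
    '```',
  ].join('\n');
}

module.exports = { buildAnalysisPrompt };
```

The route handler then calls `buildAnalysisPrompt(code)` instead of inlining the template literal; the inline version above is fine for hour 2, and the extraction is an hour-6 cleanup.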

Hour 6-12: The Boring Important Stuff

With the core working, the second phase is the infrastructure that makes it actually usable:

  • Input validation — Don't trust user input, ever
  • Error handling — Graceful failures that don't expose internals
  • Rate limiting — You will get abused without this
  • Basic auth — If there's any user state or paid features
  • Logging — You need to know what's happening in production

// Basic rate limiting without a heavy library
// (in-memory and per-process; the Map grows with unique IPs,
// so sweep old entries periodically or move to Redis at scale)
const requestCounts = new Map();

function rateLimit(maxRequests, windowMs) {
  return (req, res, next) => {
    const ip = req.ip;
    const now = Date.now();
    const windowStart = now - windowMs;

    if (!requestCounts.has(ip)) {
      requestCounts.set(ip, []);
    }

    const requests = requestCounts.get(ip).filter(time => time > windowStart);
    requests.push(now);
    requestCounts.set(ip, requests);

    if (requests.length > maxRequests) {
      return res.status(429).render('error', { 
        message: 'Too many requests. Please wait a moment before trying again.' 
      });
    }

    next();
  };
}

// Apply to AI endpoints
app.post('/analyze', rateLimit(10, 60000), analyzeRoute);
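Input validation from the same checklist follows the same pattern: a small middleware with no library dependency. A sketch, assuming a single `code` field like the analyzer above (the length limit is illustrative, not from the original project):

```javascript
// Validate the analyzer's input before it reaches the AI layer.
// Returns an error message string, or null when the input is acceptable.
function validateCodeInput(code, maxLength = 20000) {
  if (typeof code !== 'string' || code.trim().length === 0) {
    return 'Please paste some code to analyze.';
  }
  if (code.length > maxLength) {
    return `Input is too long (max ${maxLength} characters).`;
  }
  return null;
}

// Express middleware wrapper around the pure validator
function requireValidCode(req, res, next) {
  const error = validateCodeInput(req.body && req.body.code);
  if (error) {
    return res.status(400).render('analyze', { error });
  }
  next();
}
```

Keeping the check itself a pure function means the middleware stays one line of glue and the rules can be tested without spinning up Express.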

Hour 12-18: Polish and Edge Cases

The features that feel obvious after you've used the thing for a few hours:

  • What happens when the AI returns something malformed?
  • What if the user's input is too long for the context window?
  • Is the loading state clear enough that users don't click submit twice?
  • Does it work on mobile without looking broken?

This phase is also where I typically add the one or two features that emerged from actually using the core loop. There's always something.
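For the too-long-input case above, the cheapest fix is truncating to a rough character budget before the API call rather than letting the request fail. A sketch — the four-characters-per-token ratio is a coarse rule of thumb for English and code, not an exact tokenizer:

```javascript
// Trim user input to a rough token budget before sending it to the model.
// ~4 characters per token is a heuristic; for exact counts you'd
// use a real tokenizer or the provider's token-counting endpoint.
function truncateToBudget(text, maxTokens = 4000) {
  const maxChars = maxTokens * 4;
  if (text.length <= maxChars) {
    return { text, truncated: false };
  }
  return {
    text: text.slice(0, maxChars) + '\n\n[... input truncated ...]',
    truncated: true,
  };
}
```

When `truncated` comes back true, the result page can note that only part of the input was analyzed, which beats a silent failure.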

Hour 18-22: Deployment

DigitalOcean droplet, Nginx config, Let's Encrypt SSL, PM2 for process management. I have a deployment playbook I follow the same way every time.

# On the droplet
git clone [repo]
cd [project]
npm install --omit=dev   # (--production on older npm versions)
cp .env.example .env  # Edit with real values

# Start with PM2
pm2 start app.js --name my-new-thing
pm2 save
pm2 startup

# Nginx config
# /etc/nginx/sites-available/my-new-thing
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Hour 22-24: Getting It in Front of People

$0 marketing budget doesn't mean no distribution. It means earned distribution only.

Post-launch workflow:

  1. Post to Hacker News (Show HN) with an honest, specific description of what it does
  2. Share in relevant Discord servers and Slack communities — not as spam, as "here's something I built, would love feedback"
  3. Write a short thread on X walking through what you built and why
  4. Post to relevant subreddits (r/webdev, r/SideProject, r/IndieHackers)

The key is specificity. "I built an AI tool" gets ignored. "I built a tool that analyzes your Express.js code for OWASP Top 10 vulnerabilities and shows you how to fix them" gets looked at.


The $0 Marketing Reality Check

I've run paid ads. I ran Facebook campaigns on AutoDetective.ai and got clicks. I did not get meaningful returns relative to spend.

The fundamental problem with paid ads for small SaaS tools: the unit economics don't work until you have proven conversion rates and a clear LTV calculation. Without those numbers, you're running an expensive experiment with money you probably can't afford to lose.

What actually works with zero budget:

Being specific enough to appear in organic searches. A tool that does one thing well and is described clearly will get found by the people who need it. This is slow. It's also free and compounding.

Building in public. Document what you're building, the decisions you're making, the problems you're solving. This creates content that attracts the audience most likely to find your tool useful.

Solving a problem the community already talks about. The best distribution is when someone mentions the exact problem your tool solves in a forum and someone else links to you. This only happens if your tool is specific enough to match specific problems.

Making it genuinely shareable. Tools that produce outputs people want to share — reports, analyses, visualizations — get shown to other people organically. Build the share mechanic in.


What I've Learned from the Failures

Not every 24-hour build is worth continuing. Some die in production and should.

Signs a build should be abandoned:

  • Nobody uses it within the first two weeks, even with active promotion
  • The only people using it are looking for something slightly different
  • The core loop isn't interesting enough that you want to keep improving it

Signs it's worth continuing:

  • Organic, unprompted shares within the first week
  • Users who come back more than once
  • People asking for the one specific feature you didn't build yet

The 24-hour build frame is designed to fail fast enough that the cost of learning is low. A bad idea killed in 24 hours is infinitely better than a bad idea dragged through six months of development.


The Current Build: What's Next

I'm not going to give away the specific project I'm building next (see: learning from past mistakes about announcing early). But the pattern I'm currently most interested in exploring:

Tools that produce things engineers need to share with non-technical stakeholders. Analysis reports, compliance documentation, architecture summaries. The audience for these tools is engineers who are tired of translating technical reality into something their organization can act on.

That's a specific problem, with a specific audience, that AI is genuinely positioned to help with. The kind of problem worth spending 24 hours on.


Shane is the founder of Grizzly Peak Software. He builds AI tools from a cabin in Alaska and writes about what he's learned.
