One-Click Deploy Agents: Streamlining Vercel + Supabase Setups

I got tired of the same deployment ritual. Create a new Vercel project. Connect the GitHub repo. Set up environment variables. Spin up a new Supabase project. Copy the connection string. Run the schema migration. Configure the auth settings. Update the environment variables again. Test the deployment. Fix the thing I forgot. Redeploy.

For one project, that's 20 minutes of clicking around dashboards. When you're prototyping three ideas a week — which I do regularly, because most ideas die fast and that's fine — it's an hour of mindless clicking that could be automated. So I automated it. I built a deploy agent that takes a repo URL and a project name and does everything else.

Here's how it works, what went wrong along the way, and when you should absolutely not use something like this.


The Problem With Manual Deployment in 2026

The modern deployment stack has gotten remarkably good. Vercel, Netlify, Railway, Supabase, PlanetScale, Neon — these platforms have reduced deployment complexity by an order of magnitude compared to what we dealt with ten years ago. But there's a paradox: the easier each individual step becomes, the more steps we add to our deployment pipelines.

A typical full-stack project in 2026 needs:

  • A frontend hosting platform (Vercel, Netlify)
  • A database (Supabase, PlanetScale, Neon)
  • An auth provider (often bundled with the database)
  • Environment variable configuration across multiple services
  • DNS and domain setup
  • CI/CD pipeline configuration
  • Possibly an edge function runtime
  • Possibly a storage bucket for file uploads

Each of these has an excellent dashboard. Each dashboard takes 3-5 minutes to configure. Multiply that across 8 services and you're looking at 30-40 minutes of context-switching between browser tabs, copying API keys, and hoping you didn't paste the staging key into production.

This is exactly the kind of repetitive, well-defined, multi-step process that AI agents are good at.


What a Deploy Agent Actually Does

Let me be specific about what I built, because "AI agent for deployment" sounds either terrifyingly complex or uselessly vague depending on your perspective. It's neither.

A deploy agent is a script that:

  1. Takes structured input (repo URL, project name, configuration preferences)
  2. Makes API calls to deployment platforms in the correct sequence
  3. Passes outputs from one step as inputs to the next (like passing Supabase credentials to Vercel environment variables)
  4. Handles errors and retries where appropriate
  5. Reports the final state — deployed URL, database connection info, any failures

There's no magic here. It's API orchestration with some conditional logic. The "AI" part comes in if you want natural language input ("deploy my Next.js app with a Postgres database and auth") or if you want the agent to make decisions about configuration based on the project structure.

I built mine without the natural language layer. It's a Node.js script with a config file. Here's the core structure:

var axios = require("axios");
var fs = require("fs");

var config = {
    vercelToken: process.env.VERCEL_TOKEN,
    supabaseToken: process.env.SUPABASE_ACCESS_TOKEN,
    supabaseOrgId: process.env.SUPABASE_ORG_ID,
    githubToken: process.env.GITHUB_TOKEN
};

function deployProject(options) {
    var projectName = options.name;
    var repoUrl = options.repo;
    var schemaFile = options.schema || null;

    console.log("Starting deployment for: " + projectName);

    return createSupabaseProject(projectName)
        .then(function(supabase) {
            console.log("Supabase project created: " + supabase.id);
            return applySchema(supabase, schemaFile)
                .then(function() {
                    return supabase;
                });
        })
        .then(function(supabase) {
            var envVars = buildEnvVars(supabase);
            return createVercelProject(projectName, repoUrl, envVars);
        })
        .then(function(vercel) {
            console.log("Vercel project created: " + vercel.url);
            return triggerDeploy(vercel.id);
        })
        .then(function(deployment) {
            console.log("Deployment complete: " + deployment.url);
            return deployment;
        })
        .catch(function(err) {
            console.error("Deployment failed: " + err.message);
            throw err;
        });
}

Nothing revolutionary. But the value isn't in the code — it's in the orchestration.


Setting Up the Vercel API Integration

Vercel has a solid REST API. You need a personal access token or a team token, and then you can do pretty much anything the dashboard does.

Creating a project and linking it to a GitHub repo:

function createVercelProject(name, repoUrl, envVars) {
    var repoParts = parseGitHubUrl(repoUrl);

    var payload = {
        name: name,
        framework: "nextjs",
        gitRepository: {
            type: "github",
            repo: repoParts.owner + "/" + repoParts.repo
        },
        environmentVariables: envVars.map(function(v) {
            return {
                key: v.key,
                value: v.value,
                target: ["production", "preview", "development"],
                type: "encrypted"
            };
        })
    };

    return axios.post("https://api.vercel.com/v10/projects", payload, {
        headers: {
            "Authorization": "Bearer " + config.vercelToken,
            "Content-Type": "application/json"
        }
    }).then(function(res) {
        return res.data;
    });
}

function parseGitHubUrl(url) {
    // Handles both https://github.com/owner/repo and git@github.com:owner/repo
    var match = url.match(/github\.com[/:]([\w.-]+)\/([\w.-]+?)(?:\.git)?$/);
    if (!match) throw new Error("Invalid GitHub URL: " + url);
    return { owner: match[1], repo: match[2] };
}

function triggerDeploy(projectId) {
    // Heads-up: for git-backed deployments, the v13 endpoint generally
    // also expects a gitSource object (type, repoId, ref). In practice,
    // a project linked to GitHub will deploy automatically on the next
    // push even without this explicit call.
    return axios.post(
        "https://api.vercel.com/v13/deployments",
        { name: projectId, target: "production" },
        {
            headers: {
                "Authorization": "Bearer " + config.vercelToken,
                "Content-Type": "application/json"
            }
        }
    ).then(function(res) {
        return res.data;
    });
}

One gotcha: Vercel's API requires your GitHub account to be connected to your Vercel account first. You can't do that via API — it's an OAuth flow that has to happen in the browser once. After that, the API can create projects linked to any of your GitHub repos.


The Supabase Side

Supabase's Management API lets you create projects, manage databases, and configure auth — all programmatically. This is the part that saves the most time because Supabase project creation involves waiting for the database to provision, which takes 1-2 minutes.

var crypto = require("crypto");

function generateSecurePassword() {
    // 32 hex characters from Node's crypto module
    return crypto.randomBytes(16).toString("hex");
}

function createSupabaseProject(name) {
    // Keep the generated password around -- the credentials endpoint
    // doesn't return it, but the connection string needs it later.
    var dbPass = generateSecurePassword();

    var payload = {
        name: name,
        organization_id: config.supabaseOrgId,
        plan: "free",
        region: "us-west-1",
        db_pass: dbPass
    };

    return axios.post(
        "https://api.supabase.com/v1/projects",
        payload,
        {
            headers: {
                "Authorization": "Bearer " + config.supabaseToken,
                "Content-Type": "application/json"
            }
        }
    ).then(function(res) {
        var project = res.data;
        // Wait for project to be ready, then fetch its API keys
        return waitForProject(project.id).then(function() {
            return getProjectCredentials(project.id);
        }).then(function(creds) {
            creds.dbPassword = dbPass;
            return creds;
        });
    });
}

function waitForProject(projectId) {
    return new Promise(function(resolve, reject) {
        var attempts = 0;
        var maxAttempts = 30;

        var check = function() {
            axios.get(
                "https://api.supabase.com/v1/projects/" + projectId,
                {
                    headers: {
                        "Authorization": "Bearer " + config.supabaseToken
                    }
                }
            ).then(function(res) {
                if (res.data.status === "ACTIVE_HEALTHY") {
                    resolve(res.data);
                } else if (attempts >= maxAttempts) {
                    reject(new Error("Project provisioning timed out"));
                } else {
                    attempts++;
                    setTimeout(check, 5000);
                }
            }).catch(reject);
        };

        check();
    });
}

function getProjectCredentials(projectId) {
    return axios.get(
        "https://api.supabase.com/v1/projects/" + projectId + "/api-keys",
        {
            headers: {
                "Authorization": "Bearer " + config.supabaseToken
            }
        }
    ).then(function(res) {
        var keys = res.data;
        var anonKey = keys.find(function(k) { return k.name === "anon"; });
        var serviceKey = keys.find(function(k) { return k.name === "service_role"; });

        return {
            id: projectId,
            url: "https://" + projectId + ".supabase.co",
            anonKey: anonKey ? anonKey.api_key : null,
            serviceKey: serviceKey ? serviceKey.api_key : null
        };
    });
}

The waitForProject function is critical. Supabase database provisioning is asynchronous — you create the project and then poll until it's healthy. Without this, you'd try to apply your schema to a database that doesn't exist yet.


Wiring the Pieces Together

The real value of the agent is in the handoff between services. Supabase gives you credentials. Those credentials become Vercel environment variables. The schema file gets applied to the database. Everything chains together.

function buildEnvVars(supabase) {
    return [
        { key: "NEXT_PUBLIC_SUPABASE_URL", value: supabase.url },
        { key: "NEXT_PUBLIC_SUPABASE_ANON_KEY", value: supabase.anonKey },
        { key: "SUPABASE_SERVICE_ROLE_KEY", value: supabase.serviceKey },
        { key: "DATABASE_URL", value: buildConnectionString(supabase) }
    ];
}

function buildConnectionString(supabase) {
    return "postgresql://postgres:" + supabase.dbPassword
        + "@db." + supabase.id + ".supabase.co:5432/postgres";
}

function applySchema(supabase, schemaFile) {
    if (!schemaFile) {
        console.log("No schema file specified, skipping migration");
        return Promise.resolve();
    }

    var schema = fs.readFileSync(schemaFile, "utf8");

    // Note: exec_sql is not a built-in Supabase RPC -- you have to
    // define a SQL function with that name in the database first.
    // The Management API's database query endpoint is an alternative
    // that doesn't require a custom function.
    return axios.post(
        "https://" + supabase.id + ".supabase.co/rest/v1/rpc/exec_sql",
        { query: schema },
        {
            headers: {
                "Authorization": "Bearer " + supabase.serviceKey,
                "apikey": supabase.anonKey,
                "Content-Type": "application/json"
            }
        }
    ).then(function() {
        console.log("Schema applied successfully");
    });
}

One thing I learned the hard way: the schema application step needs error handling around individual statements. If your schema file has CREATE TABLE IF NOT EXISTS statements you're mostly safe, but if it has ALTER TABLE commands that depend on prior state, one failure can cascade. I now split schema files into individual statements and apply them sequentially with independent error handling for each.
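A minimal version of that statement-by-statement approach might look like the sketch below. The semicolon split is deliberately naive — it's fine for plain CREATE/ALTER migration files but wrong for function bodies or string literals containing semicolons — and `runStatement` is a hypothetical callback standing in for whatever executes a single statement against the database:

```javascript
// Naive splitter: adequate for simple migration files, not for
// $$ ... $$ function bodies or strings containing semicolons.
function splitStatements(sql) {
    return sql
        .split(";")
        .map(function(s) { return s.trim(); })
        .filter(function(s) { return s.length > 0; });
}

// Apply statements one at a time; a failure is recorded but doesn't
// stop the remaining statements from running.
function applyStatementsSequentially(statements, runStatement) {
    var results = [];
    return statements.reduce(function(chain, stmt) {
        return chain.then(function() {
            return runStatement(stmt)
                .then(function() { results.push({ sql: stmt, ok: true }); })
                .catch(function(err) {
                    results.push({ sql: stmt, ok: false, error: err.message });
                });
        });
    }, Promise.resolve()).then(function() { return results; });
}
```

The results array doubles as a migration report: you can see exactly which statements failed without losing the ones that succeeded.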


Adding Intelligence: When the Agent Makes Decisions

The basic version above is pure orchestration — it does what you tell it in what order. The more interesting version analyzes the project before deploying.

function detectProjectConfig(repoPath) {
    var config = {
        framework: null,
        needsDatabase: false,
        needsAuth: false,
        buildCommand: null,
        outputDir: null
    };

    // Check package.json for framework
    var packageJson = JSON.parse(
        fs.readFileSync(repoPath + "/package.json", "utf8")
    );
    var deps = Object.assign(
        {},
        packageJson.dependencies || {},
        packageJson.devDependencies || {}
    );

    if (deps["next"]) {
        config.framework = "nextjs";
        config.buildCommand = "next build";
        config.outputDir = ".next";
    } else if (deps["nuxt"]) {
        config.framework = "nuxtjs";
        config.buildCommand = "nuxt build";
        config.outputDir = ".output";
    } else if (deps["@sveltejs/kit"]) {
        config.framework = "sveltekit";
        config.buildCommand = "vite build";
        config.outputDir = "build";
    }

    // Check for database usage
    if (deps["@supabase/supabase-js"] || deps["pg"] || deps["prisma"]) {
        config.needsDatabase = true;
    }

    // Check for auth patterns
    if (deps["@supabase/auth-helpers-nextjs"] || deps["next-auth"]) {
        config.needsAuth = true;
    }

    // Check for schema files
    var schemaLocations = [
        "/supabase/migrations",
        "/prisma/schema.prisma",
        "/db/schema.sql"
    ];

    schemaLocations.forEach(function(loc) {
        if (fs.existsSync(repoPath + loc)) {
            config.schemaPath = repoPath + loc;
        }
    });

    return config;
}

This is where you could layer in an actual LLM call — have Claude or GPT-4 analyze the project structure and make deployment recommendations. I've experimented with this and it works surprisingly well for detecting things like "this project has a .env.example file with 12 variables, here's what each one probably needs." But honestly, for most projects the deterministic detection above handles 90% of cases without the API cost or latency.
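For the .env.example case specifically, a deterministic parser gets you the variable names without any model call; what it can't do is explain what each variable means. A hypothetical helper (not part of the agent above) might look like:

```javascript
// Parse a .env.example file into { key, placeholder } pairs.
// Comments and blank lines are skipped; everything after the first
// "=" is treated as the placeholder value.
function parseEnvExample(contents) {
    return contents
        .split("\n")
        .map(function(line) { return line.trim(); })
        .filter(function(line) {
            return line.length > 0
                && line.charAt(0) !== "#"
                && line.indexOf("=") !== -1;
        })
        .map(function(line) {
            var idx = line.indexOf("=");
            return {
                key: line.slice(0, idx).trim(),
                placeholder: line.slice(idx + 1).trim()
            };
        });
}
```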


Error Handling and Rollback

Deployment agents that create real infrastructure need to clean up after themselves when things fail. If Supabase creation succeeds but Vercel creation fails, you've got an orphaned database project burning through your free tier quota.

function deployWithRollback(options) {
    var createdResources = [];

    return createSupabaseProject(options.name)
        .then(function(supabase) {
            createdResources.push({
                type: "supabase",
                id: supabase.id
            });
            return applySchema(supabase, options.schema)
                .then(function() { return supabase; });
        })
        .then(function(supabase) {
            var envVars = buildEnvVars(supabase);
            return createVercelProject(options.name, options.repo, envVars);
        })
        .then(function(vercel) {
            createdResources.push({
                type: "vercel",
                id: vercel.id
            });
            return triggerDeploy(vercel.id);
        })
        .catch(function(err) {
            console.error("Deployment failed, rolling back...");
            return rollback(createdResources).then(function() {
                throw err;
            });
        });
}

function rollback(resources) {
    var rollbacks = resources.map(function(resource) {
        if (resource.type === "supabase") {
            return axios.delete(
                "https://api.supabase.com/v1/projects/" + resource.id,
                {
                    headers: {
                        "Authorization": "Bearer " + config.supabaseToken
                    }
                }
            ).catch(function(err) {
                console.error("Rollback failed for Supabase: " + err.message);
            });
        }
        if (resource.type === "vercel") {
            return axios.delete(
                "https://api.vercel.com/v9/projects/" + resource.id,
                {
                    headers: {
                        "Authorization": "Bearer " + config.vercelToken
                    }
                }
            ).catch(function(err) {
                console.error("Rollback failed for Vercel: " + err.message);
            });
        }
        return Promise.resolve();
    });

    return Promise.all(rollbacks);
}

When to Use This vs. Manual Deployment

I want to be honest about when a deploy agent makes sense and when it's over-engineering.

Use a deploy agent when:

  • You're deploying the same stack repeatedly (agency work, multiple client projects, rapid prototyping)
  • Your team has a standardized stack and onboarding new projects is a bottleneck
  • You're building a platform that provisions infrastructure for users (SaaS with dedicated databases per tenant)
  • You're doing hackathons or rapid prototyping where 20 minutes of setup is 20% of your available time

Don't use a deploy agent when:

  • You're deploying one project once. Just use the dashboard. Seriously.
  • Your deployment involves complex, one-off infrastructure decisions. Agents are good at repetitive patterns, not novel architecture
  • You don't understand the manual process yet. Automating something you don't understand means you can't debug it when it breaks
  • The platforms change their APIs frequently enough that maintenance costs exceed time saved

For me, the breakeven was around the fourth project. The first three projects were faster to deploy manually than to build and debug the agent. But from project four onward, every deployment was under two minutes instead of twenty. That math works out fast when you're prototyping aggressively.


Security Considerations

One thing that keeps me up at night about deploy agents: they hold the keys to everything. Your Vercel token can delete production deployments. Your Supabase token can drop databases. Your GitHub token can access private repos.

I handle this a few ways:

  1. Scoped tokens — Use the most restrictive token scope possible. Vercel lets you create tokens scoped to specific teams. Supabase access tokens can be limited in scope
  2. No tokens in code — Everything comes from environment variables, never hardcoded. The deploy script runs on my local machine, not in any CI pipeline
  3. Dry-run mode — The agent has a --dry-run flag that prints every API call it would make without executing any of them. I always dry-run first on a new project configuration
  4. Confirmation prompts — Before any destructive operation (project deletion during rollback), the agent prompts for confirmation unless explicitly running in non-interactive mode

The confirmation prompt itself is a small readline helper:
var readline = require("readline");

function confirm(message) {
    if (process.env.DEPLOY_NONINTERACTIVE === "true") {
        return Promise.resolve(true);
    }

    var rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout
    });

    return new Promise(function(resolve) {
        rl.question(message + " (y/n): ", function(answer) {
            rl.close();
            resolve(answer.toLowerCase() === "y");
        });
    });
}

What's Next: Multi-Cloud and Preview Environments

The current version handles Vercel + Supabase, which covers maybe 60% of my prototyping needs. The next iteration adds support for Railway (for projects that need a traditional server), Cloudflare Workers (for edge-first architectures), and Neon (as a Supabase alternative when I don't need auth or storage).

The more interesting extension is automatic preview environment creation on pull requests. When a PR opens, the agent spins up a preview deployment with its own isolated database seeded with test data. When the PR closes, it tears everything down. Vercel does some of this natively, but the database isolation part requires orchestration that the platform doesn't handle out of the box.

I'm also experimenting with using Claude to generate the deployment configuration by analyzing the codebase. You point it at a repo, it reads the package.json, looks at the file structure, checks for database migrations, and outputs a deploy config. Early results are promising — it correctly identifies the framework, database needs, and required environment variables about 85% of the time. The other 15% is where the human still needs to step in.

That's fine. Deploy agents aren't about removing humans from the loop. They're about removing humans from the boring parts of the loop so we can focus on the interesting parts — like actually building the thing we're deploying.


Shane Larson is the founder of Grizzly Peak Software, building tools and writing code from a cabin in Caswell Lakes, Alaska. With over 30 years in the industry, he's deployed more projects than he can count — and automated the deployment of most of the recent ones. His latest book covers training and fine-tuning large language models.
