24-Hour SaaS Challenge: AI-Powered Cabin Maintenance Tracker

At 6 AM on a Saturday, I made the kind of decision that only sounds reasonable when you're on your second pot of coffee and the temperature outside is negative twelve. I decided to build and deploy a SaaS application in 24 hours.

Not a landing page. Not a mockup. A working, deployed, actually-useful application with a database, user interface, AI-powered predictions, and a payment integration. In one day.

The idea came from my own pain. I live in a cabin in Caswell Lakes, Alaska. Maintaining a cabin in sub-arctic conditions is a relentless logistics problem. The wood stove needs different maintenance in October than it does in January. The water system needs winterization at precisely the right time — too early and you waste usable weeks, too late and you're dealing with burst pipes at midnight in the dark. The roof needs inspection after every heavy snow load, the generator needs servicing before every season, and the Starlink dish needs clearing after every ice storm.

I've been tracking all of this in a spreadsheet. A bad spreadsheet. The kind with merged cells and color coding that made sense when I created it and makes no sense now.

I wanted something better. Something that knows what season it is, what the weather's doing, and what's due next. And I wanted to build it in a day to prove a point about what's possible with modern AI-assisted development.

Here's how it went.


Hour 0-1: Architecture and Planning (6:00 AM - 7:00 AM)

I started with Claude Code open in one terminal and a blank project directory in the other. Before writing a single line of code, I spent the first hour on architecture decisions.

The stack I chose:

  • Backend: Node.js with Express
  • Database: PostgreSQL (hosted on DigitalOcean)
  • Frontend: Server-rendered HTML with Pug templates and minimal JavaScript
  • AI Integration: OpenAI API for maintenance predictions
  • Weather Data: OpenWeatherMap API (free tier)
  • Deployment: DigitalOcean App Platform
  • Payments: Stripe (for the eventual premium tier)

Why this stack? Because I already know it cold. The 24-hour constraint means zero time for learning. Every minute spent reading documentation for a new framework is a minute not spent building features.

I sketched the data model on paper. Old school, but it works:

Users: id, email, password_hash, location_lat, location_lng, timezone, created_at
Properties: id, user_id, name, type (cabin/house/shop), location_description
Assets: id, property_id, name, category, install_date, last_serviced, notes
MaintenanceTasks: id, asset_id, title, description, interval_days, season, priority
MaintenanceLog: id, task_id, completed_at, notes, cost
WeatherCache: id, location_key, data, fetched_at
Predictions: id, task_id, predicted_date, confidence, reasoning, weather_factors

Seven tables. Simple enough to build fast, complex enough to be useful.
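Nothing exotic hides in that model. The "what's due next" logic it supports is plain date arithmetic on interval_days and the most recent log entry. A sketch of that core, illustrative rather than lifted from the build:

```javascript
// Illustrative helper, not code from the real app: a task's next due date
// is its last completion plus its interval.
function nextDueDate(lastCompletedAt, intervalDays) {
    if (!lastCompletedAt) return new Date(); // never logged: due immediately
    var due = new Date(lastCompletedAt);
    due.setUTCDate(due.getUTCDate() + intervalDays);
    return due;
}
```

Everything else — predictions, alerts, dashboards — is layered on top of that one calculation.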


Hour 1-3: Database and Core API (7:00 AM - 9:00 AM)

I asked Claude Code to generate the PostgreSQL schema based on my paper sketch. It produced something usable on the first pass, though I adjusted a few column types and added some indexes it missed.

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    location_lat DECIMAL(10, 7),
    location_lng DECIMAL(10, 7),
    timezone VARCHAR(50) DEFAULT 'America/Anchorage',
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE properties (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    name VARCHAR(255) NOT NULL,
    type VARCHAR(50) NOT NULL,
    location_description TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE assets (
    id SERIAL PRIMARY KEY,
    property_id INTEGER REFERENCES properties(id) ON DELETE CASCADE,
    name VARCHAR(255) NOT NULL,
    category VARCHAR(100) NOT NULL,
    install_date DATE,
    last_serviced DATE,
    notes TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE maintenance_tasks (
    id SERIAL PRIMARY KEY,
    asset_id INTEGER REFERENCES assets(id) ON DELETE CASCADE,
    title VARCHAR(255) NOT NULL,
    description TEXT,
    interval_days INTEGER,
    season VARCHAR(20),
    priority VARCHAR(20) DEFAULT 'medium',
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE maintenance_log (
    id SERIAL PRIMARY KEY,
    task_id INTEGER REFERENCES maintenance_tasks(id) ON DELETE CASCADE,
    completed_at TIMESTAMP DEFAULT NOW(),
    notes TEXT,
    cost DECIMAL(10, 2)
);

With the schema in place, I built the Express API layer. This is where Claude Code really earned its keep. I described the CRUD operations I needed, pointed it at my existing Express projects for style reference, and it generated route handlers that matched my patterns — var declarations, callback-style error handling, the whole thing.

The core API endpoints took about ninety minutes:

var express = require('express');
var router = express.Router();
var db = require('../db/postgres');

// Get all assets for a property with their maintenance status
router.get('/api/properties/:propertyId/assets', function(req, res) {
    var propertyId = req.params.propertyId;
    var userId = req.session.userId;

    var query = [
        'SELECT a.*, ',
        '  (SELECT MAX(ml.completed_at) FROM maintenance_log ml ',
        '   JOIN maintenance_tasks mt ON ml.task_id = mt.id ',
        '   WHERE mt.asset_id = a.id) as last_maintained,',
        '  (SELECT COUNT(*) FROM maintenance_tasks mt ',
        '   WHERE mt.asset_id = a.id) as task_count',
        'FROM assets a ',
        'JOIN properties p ON a.property_id = p.id ',
        'WHERE a.property_id = $1 AND p.user_id = $2 ',
        'ORDER BY a.category, a.name'
    ].join('\n');

    db.query(query, [propertyId, userId], function(err, result) {
        if (err) {
            console.error('Error fetching assets:', err);
            return res.status(500).json({ error: 'Failed to fetch assets' });
        }
        res.json(result.rows);
    });
});

module.exports = router;

By 9 AM, I had user registration, login, property management, asset tracking, and maintenance task CRUD all working. Not pretty, but functional.


Hour 3-6: The AI Prediction Engine (9:00 AM - 12:00 PM)

This is the feature that makes the app more than a glorified spreadsheet. The prediction engine takes your maintenance history, the current season, and real-time weather data, then generates prioritized recommendations for what needs attention.

The weather integration came first. I wrapped the OpenWeatherMap API in a simple caching layer:

var https = require('https');
var db = require('../db/postgres');

var WEATHER_API_KEY = process.env.OPENWEATHER_API_KEY;
var CACHE_HOURS = 6;

function getWeather(lat, lng, callback) {
    // Check cache first
    var cacheKey = lat.toFixed(2) + ',' + lng.toFixed(2);

    db.query(
        "SELECT data, fetched_at FROM weather_cache WHERE location_key = $1 AND fetched_at > NOW() - INTERVAL '" + CACHE_HOURS + " hours'",
        [cacheKey],
        function(err, result) {
            if (err) return callback(err);

            if (result.rows.length > 0) {
                return callback(null, JSON.parse(result.rows[0].data));
            }

            // Fetch fresh data
            var url = 'https://api.openweathermap.org/data/3.0/onecall?lat=' + lat +
                '&lon=' + lng + '&appid=' + WEATHER_API_KEY + '&units=imperial';

            https.get(url, function(res) {
                var body = '';
                res.on('data', function(chunk) { body += chunk; });
                res.on('end', function() {
                    var data;
                    try {
                        data = JSON.parse(body);
                    } catch (parseErr) {
                        return callback(parseErr);
                    }

                    // Cache it (the upsert assumes a unique index on location_key)
                    db.query(
                        'INSERT INTO weather_cache (location_key, data, fetched_at) VALUES ($1, $2, NOW()) ' +
                        'ON CONFLICT (location_key) DO UPDATE SET data = $2, fetched_at = NOW()',
                        [cacheKey, body]
                    );

                    callback(null, data);
                });
            }).on('error', callback);
        }
    );
}

module.exports = { getWeather: getWeather };

Then came the interesting part — the AI prediction logic. I built a prompt that takes the user's maintenance history, asset information, current weather, and seasonal context, then asks the model to predict upcoming maintenance needs.

The key insight was this: the AI doesn't need to be a maintenance expert. It needs to be a pattern matcher. When I tell it "the wood stove was last cleaned 85 days ago, it's mid-January, the temperature has been below -10 for a week, and the stove is used 16 hours per day in these conditions," it can reasonably predict that creosote buildup is accelerating and chimney cleaning should be prioritized.

var OpenAI = require('openai');

var openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

function generatePredictions(assets, tasks, logs, weather, callback) {
    var currentMonth = new Date().getMonth() + 1;
    var season = getSeason(currentMonth);

    var prompt = buildPredictionPrompt(assets, tasks, logs, weather, season);

    openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [
            {
                role: 'system',
                content: 'You are a property maintenance advisor. Analyze the maintenance history, current weather conditions, and seasonal factors to predict upcoming maintenance needs. Return a JSON object with a "predictions" array; each prediction has fields: task_id, urgency (1-10), predicted_date (ISO string), reasoning (one sentence), weather_risk (boolean).'
            },
            { role: 'user', content: prompt }
        ],
        response_format: { type: 'json_object' },
        temperature: 0.3
    }).then(function(response) {
        // json_object mode returns an object, so unwrap the predictions array
        var parsed = JSON.parse(response.choices[0].message.content);
        callback(null, parsed.predictions || parsed);
    }).catch(function(err) {
        callback(err);
    });
}

function getSeason(month) {
    if (month >= 11 || month <= 3) return 'winter';
    if (month >= 4 && month <= 5) return 'spring';
    if (month >= 6 && month <= 8) return 'summer';
    return 'fall';
}

function buildPredictionPrompt(assets, tasks, logs, weather, season) {
    var lines = [];
    lines.push('Current season: ' + season);
    lines.push('Current temperature: ' + weather.current.temp + '°F');
    lines.push('Weather conditions: ' + weather.current.weather[0].description);
    lines.push('Wind speed: ' + weather.current.wind_speed + ' mph');

    if (weather.alerts && weather.alerts.length > 0) {
        lines.push('ACTIVE WEATHER ALERTS:');
        weather.alerts.forEach(function(alert) {
            lines.push('  - ' + alert.event + ': ' + alert.description.substring(0, 200));
        });
    }

    lines.push('\nAssets and maintenance history:');
    assets.forEach(function(asset) {
        lines.push('\n' + asset.name + ' (' + asset.category + ')');
        lines.push('  Installed: ' + (asset.install_date || 'unknown'));

        var assetTasks = tasks.filter(function(t) { return t.asset_id === asset.id; });
        assetTasks.forEach(function(task) {
            var taskLogs = logs.filter(function(l) { return l.task_id === task.id; });
            var lastDone = taskLogs.length > 0 ? taskLogs[0].completed_at : 'never';
            lines.push('  Task: ' + task.title + ' (every ' + task.interval_days + ' days, priority: ' + task.priority + ')');
            lines.push('    Last completed: ' + lastDone);
        });
    });

    return lines.join('\n');
}

module.exports = { generatePredictions: generatePredictions };

I chose GPT-4o-mini over a larger model deliberately. The predictions don't need frontier-level reasoning — they need fast, cheap, structured output. At roughly $0.15 per 1M input tokens, I can run predictions for a user's entire property for less than a penny.
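The "less than a penny" claim is easy to sanity-check. The token count below is an assumption for illustration; the pricing is the figure cited above:

```javascript
// Back-of-envelope cost check. 3,000 tokens is an assumed prompt size,
// not a measured one; the rate is the gpt-4o-mini input price cited above.
var INPUT_COST_PER_MILLION_TOKENS = 0.15; // USD per 1M input tokens

function estimateInputCostUSD(inputTokens) {
    return (inputTokens / 1e6) * INPUT_COST_PER_MILLION_TOKENS;
}
```

At a few thousand input tokens per run, a full-property prediction pass costs a small fraction of a cent.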

By noon, the prediction engine was working. I tested it with my own cabin data, and the results were surprisingly sensible. It flagged my generator for an oil change (overdue by two weeks), recommended checking the roof for ice dams (we'd had heavy snow), and suggested I inspect the water heater anode rod (installed 18 months ago, typical replacement interval is 12-18 months).


Hour 6-9: The User Interface (12:00 PM - 3:00 PM)

I am not a frontend designer. I have accepted this about myself. So the UI strategy was simple: Bootstrap 5, server-rendered Pug templates, and zero client-side framework overhead.

The dashboard view was the most important screen. It needed to show:

  1. A weather summary for the property location
  2. AI-generated predictions ranked by urgency
  3. Upcoming scheduled maintenance
  4. A quick-log button for completed tasks

I won't include all the Pug template code — it's not particularly interesting — but the design principle was: every piece of information visible without scrolling on a laptop screen. Cabin owners checking their maintenance dashboard at 6 AM with coffee don't want to scroll through pages of UI.

The color coding was weather-driven. Tasks affected by current weather conditions got a blue snowflake or orange sun icon. Overdue tasks got a red border. Everything else was neutral.
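That coding scheme boils down to one small function. This is a sketch of the idea, not code from the real templates — the class names and field names are illustrative:

```javascript
// Illustrative sketch of the weather-driven badge logic described above.
// Field names (overdue, weatherFactor) and Bootstrap classes are assumptions.
function taskBadgeClass(task) {
    if (task.overdue) return 'border-danger';                 // red border
    if (task.weatherFactor === 'cold') return 'text-primary'; // blue snowflake
    if (task.weatherFactor === 'heat') return 'text-warning'; // orange sun
    return 'text-muted';                                      // neutral
}
```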

Claude Code generated the initial templates in about thirty minutes. I spent another two hours adjusting layouts, fixing mobile responsiveness, and adding the interactive elements. The "mark as complete" button used a simple fetch call:

function markComplete(taskId) {
    var notesInput = document.getElementById('notes-' + taskId);
    var costInput = document.getElementById('cost-' + taskId);

    fetch('/api/maintenance/complete', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            task_id: taskId,
            notes: notesInput ? notesInput.value : '',
            cost: costInput ? parseFloat(costInput.value) || 0 : 0
        })
    }).then(function(response) {
        if (response.ok) {
            location.reload();
        }
    });
}

Simple. No React. No Vue. No build step. Just a function that posts data and reloads the page. It's not elegant, but it works, and it took five minutes instead of five hours.


Hour 9-12: Weather Alerts and Notification Logic (3:00 PM - 6:00 PM)

The next feature was proactive alerts. The app checks weather forecasts and cross-references them with your maintenance schedule to warn you about weather-sensitive tasks.

For example: if a hard freeze is forecast in the next 48 hours and your water system winterization task is still marked as incomplete, that's a critical alert. If heavy snow is forecast and your roof hasn't been inspected in 60 days, that's a warning.

The logic was a series of rules, not AI. This is an important architectural decision — don't use AI for things that are deterministic. If the temperature is going below 32°F and the pipes aren't winterized, that's not a prediction. That's arithmetic.

function evaluateWeatherAlerts(tasks, logs, forecast) {
    var alerts = [];

    // Check for freeze risk
    var freezeRisk = forecast.daily.some(function(day) {
        return day.temp.min < 32;
    });

    if (freezeRisk) {
        var winterTasks = tasks.filter(function(t) {
            return t.season === 'winter' || t.title.toLowerCase().indexOf('winteriz') >= 0;
        });

        winterTasks.forEach(function(task) {
            var lastLog = getLastLog(task.id, logs);
            var currentYear = new Date().getFullYear();
            var completedThisYear = lastLog &&
                new Date(lastLog.completed_at).getFullYear() === currentYear;

            if (!completedThisYear) {
                alerts.push({
                    task_id: task.id,
                    type: 'freeze_warning',
                    severity: 'critical',
                    message: task.title + ' should be completed before freeze. Low of ' +
                        Math.round(getLowestTemp(forecast)) + '°F forecast.'
                });
            }
        });
    }

    // Check for heavy snow/wind
    var stormRisk = forecast.daily.some(function(day) {
        return day.wind_speed > 30 ||
            (day.snow && day.snow > 6);
    });

    if (stormRisk) {
        var structuralTasks = tasks.filter(function(t) {
            // category lives on the asset; the caller joins it onto each task row
            return t.category === 'structural' || t.category === 'roof';
        });

        structuralTasks.forEach(function(task) {
            var lastLog = getLastLog(task.id, logs);
            var daysSinceService = lastLog ?
                Math.floor((Date.now() - new Date(lastLog.completed_at)) / 86400000) :
                999;

            if (daysSinceService > 30) {
                alerts.push({
                    task_id: task.id,
                    type: 'storm_warning',
                    severity: 'warning',
                    message: 'Storm forecast. ' + task.title + ' last completed ' +
                        daysSinceService + ' days ago.'
                });
            }
        });
    }

    return alerts;
}

// Helpers referenced above (log entries assumed sorted most-recent-first)
function getLastLog(taskId, logs) {
    var matching = logs.filter(function(l) { return l.task_id === taskId; });
    return matching.length > 0 ? matching[0] : null;
}

function getLowestTemp(forecast) {
    var temps = forecast.daily.map(function(d) { return d.temp.min; });
    return Math.min.apply(null, temps);
}

I saved the AI for the fuzzy stuff — predicting when a maintenance task might need to be moved up based on usage patterns and weather history. The deterministic stuff runs on plain logic.


Hour 12-16: Authentication, Stripe, and Polish (6:00 PM - 10:00 PM)

Authentication took longer than it should have. I used bcrypt for password hashing, express-session for session management, and wrote the login/register flow by hand. Could I have used Passport.js? Sure. But Passport's abstraction layer always costs me more time in debugging than it saves in setup.

Stripe integration was for a future premium tier — free users get one property with up to ten assets, premium users get unlimited. I set up the Stripe checkout flow but didn't wire it to actual feature gates. That's a problem for day two.
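For what it's worth, the missing feature gate is a small amount of code. A sketch of what it might look like — the limits come from the tiers above, but the function and field names are my own invention, since this part was never wired up:

```javascript
// Hypothetical feature gate for the tiers described above. Not from the
// real codebase — the article notes this piece was left for day two.
var PLAN_LIMITS = {
    free: { properties: 1, assets: 10 },
    premium: { properties: Infinity, assets: Infinity }
};

function canAddAsset(plan, currentAssetCount) {
    var limits = PLAN_LIMITS[plan] || PLAN_LIMITS.free; // unknown plan: treat as free
    return currentAssetCount < limits.assets;
}
```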

The polish phase was important. I added:

  • Input validation on all forms (server-side, not just client-side)
  • Error pages that actually tell you what went wrong
  • A seed script that populates a demo account with realistic cabin data
  • Basic rate limiting on the API endpoints
  • CSRF protection on form submissions

These aren't features. They're the difference between a toy and a tool.


Hour 16-20: Testing and Deployment (10:00 PM - 2:00 AM)

I wrote exactly zero automated tests. I know, I know. But in a 24-hour challenge, manual testing wins on time-to-feedback ratio. I clicked through every flow, tried to break every form, and threw bad data at every endpoint.

I found and fixed eleven bugs in this phase. The worst one: the prediction engine crashed when a user had no maintenance history, because I was trying to access the first element of an empty array. Classic.

Deployment was straightforward because I chose a stack and platform I already know. DigitalOcean App Platform detected the Node.js app, connected to the managed PostgreSQL database, and deployed on the first push. SSL was automatic. DNS was a CNAME record.

The deployment command:

git push origin main

That's it. No Docker. No Kubernetes. No CI/CD pipeline. Push to main, App Platform builds and deploys. For a solo project, this is exactly the right amount of infrastructure.


Hour 20-24: Documentation, Demo Data, and Retrospective (2:00 AM - 6:00 AM)

The last four hours were the hardest, because I was running on caffeine and stubbornness. I wrote a landing page, created demo account credentials for anyone who wanted to try it, and recorded a short walkthrough video.

Then I sat back and looked at what I'd built.

What worked:

  • The weather-integrated prediction engine is genuinely useful. It told me things about my own cabin maintenance I hadn't thought of.
  • Server-rendered pages are fast. No loading spinners, no hydration delay. Click a link, see the page.
  • Choosing a familiar stack saved hours. Every unfamiliar tool in a time-constrained build is a risk multiplier.

What didn't work:

  • The UI is ugly. Functional, but ugly. Bootstrap gets you 80% of the way to "acceptable" and the last 20% takes real design skill I don't have.
  • I skipped email notifications entirely. The alert system generates warnings but only shows them on the dashboard. Real-world usefulness requires push notifications.
  • Multi-property support is half-baked. The data model supports it, but the UI assumes one property.

What I'd do differently:

  • Start with the UI, not the backend. In a time-constrained build, the thing users see matters more than the thing they don't.
  • Use a hosted auth service like Clerk or Auth0. Writing auth by hand took 90 minutes that could have gone toward features.
  • Build the notification system first. A maintenance tracker that doesn't remind you to do maintenance is just a diary.

The Bigger Lesson: What 24 Hours Actually Proves

Here's what I think this exercise demonstrates, and it's not "AI makes everything easy."

AI made the coding fast. The schema generation, the boilerplate routes, the API integration code — Claude Code produced all of this at a pace I couldn't match alone. Conservatively, AI saved me 8-10 hours of raw development time.

But AI didn't make the decisions fast. Choosing the right stack, designing the data model, deciding which features to build and which to skip, knowing when the prediction engine needed AI and when it needed deterministic rules — that was all human judgment, accumulated over thirty years.

The real lesson is that experienced developers can now build in 24 hours what used to take a week or two. Not because AI replaces their skills, but because it eliminates the mechanical friction between having a design in your head and having code on a server.

If you're thinking about trying a 24-hour build challenge, here's my advice:

  1. Pick a problem you personally have. You'll make better design decisions when you're the user.
  2. Use a stack you already know. Learning and building simultaneously is a recipe for a 24-hour failure.
  3. Cut scope ruthlessly. My original plan included a mobile app. That lasted about ten minutes into the planning phase before reality intervened.
  4. Deploy early. I had the app running on App Platform by hour 14. Everything after that was improvements to a live system, not theoretical code on my laptop.
  5. Don't skip error handling. The difference between a demo and a product is what happens when things go wrong.

The cabin maintenance tracker is now something I actually use. Every morning, I check the dashboard with my coffee, see what the weather's doing to my maintenance schedule, and plan accordingly. It's saved me from at least one frozen pipe situation already.

Could a non-developer build this with AI alone? Maybe a version of it. But it wouldn't have the caching layer that prevents API rate limit issues. It wouldn't have the server-side validation that prevents data corruption. It wouldn't have the deterministic rule engine that runs without burning AI credits on obvious decisions.

AI is the best power tool in the workshop. But you still need to know how to build.


Shane Larson is a software engineer with over 30 years of experience, currently building and maintaining things — both software and physical — from a cabin in Alaska. He writes about practical AI, software architecture, and the reality of tech life at grizzlypeaksoftware.com.
