Off-Grid SaaS: Building Software Tools That Run on Solar Power

My cabin in Caswell Lakes runs on a solar panel array and a battery bank. In the summer, I have more power than I know what to do with — twenty hours of Alaskan daylight will do that. In the winter, I'm rationing every watt. My monitor goes off when I'm not actively looking at it. My development server shuts down at night. I watch my battery percentage the way day traders watch stock tickers.

This is a ridiculous way for a software engineer to work. It's also taught me more about efficient system design than any distributed systems textbook ever did.

When your electricity is a finite, variable resource produced by weather and sunlight, you start thinking about computation differently. You stop asking "can the server handle this?" and start asking "is this worth the power it takes to compute?" That mental shift has influenced how I build everything, even tools that will eventually run in a data center with unlimited grid power.

This is what I've learned about building software that respects its energy budget.


The Constraints Are Real

Let me be specific about what "off-grid development" actually means in practice, because the romanticized version ("coding by candlelight in a log cabin") is misleading.

My setup: six 400-watt solar panels, a 48V lithium battery bank with about 15 kWh of usable capacity, and a 3000-watt inverter. In June, this system produces roughly 12-18 kWh per day. In December, it produces 2-4 kWh per day if the panels aren't covered in snow.

My development workstation — a laptop, an external monitor, a router for Starlink, and the Starlink dish itself — draws about 150-200 watts when I'm actively working. That's manageable in summer. In winter, it means I get 10-15 hours of work time per day if I'm not running anything else. Heat, lights, refrigeration, and cooking all compete for the same battery.
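The winter numbers are easy to sanity-check: daily energy divided by continuous load gives hours of runtime. A rough sketch (the helper is mine, the figures are from above):

```javascript
// Back-of-envelope check on winter work time.
// Daily production (kWh) divided by continuous draw (W) gives hours.
function workHours(dailyKwh, loadWatts) {
  return (dailyKwh * 1000) / loadWatts;
}

console.log(workHours(2, 200)); // worst December day, full rig: 10 hours
console.log(workHours(4, 200)); // best December day, full rig: 20 hours
```

In practice heat, lights, and refrigeration eat into the top of that range, which is where the 10-15 hour figure comes from.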

The question that changed how I think about software: what if the server also ran on this power budget?


Why This Matters Beyond My Cabin

Before you dismiss this as a niche concern for off-grid hermits, consider the broader context.

Edge computing is growing. IoT deployments, remote monitoring stations, agricultural sensors, environmental monitoring — all of these involve computation in locations with limited or intermittent power. Solar-powered edge devices are not exotic. They're increasingly normal.

Energy costs are a real line item. Cloud computing bills are dominated by compute time, which is a proxy for energy consumption. AWS, Azure, and GCP all charge you for watts, just with extra steps. Efficient code costs less to run everywhere, not just off-grid.

Sustainability is a design constraint, not just marketing. Data centers consume enormous amounts of energy. Software that does the same work with fewer CPU cycles is genuinely more sustainable. This isn't greenwashing — it's thermodynamics.

Emerging markets run on intermittent power. Billions of people live with unreliable electricity. Software designed for continuous, unlimited power doesn't serve them. Software designed for intermittent power with graceful degradation does.

So while my immediate motivation is "I want to run a useful service from my cabin without draining my batteries," the design patterns apply far beyond Alaska.


Architecture for Power-Constrained Environments

The fundamental principle: do less work, less often, and make it count.

Traditional SaaS architecture assumes always-on compute. A web server runs 24/7 waiting for requests. A database server maintains connections continuously. Background workers poll queues constantly. This model works fine when electricity is unlimited. It's catastrophically wasteful when every watt matters.

Here's how I redesign for power constraints:

Event-driven over polling. Never poll. Never run a loop checking for work. Use event-driven patterns where computation only happens when triggered. This is the single biggest power savings in any system.

// BAD: Polling pattern - wastes cycles checking for work
var CHECK_INTERVAL = 5000;
setInterval(function() {
  checkForNewJobs()
    .then(function(jobs) {
      if (jobs.length > 0) {
        processJobs(jobs);
      }
    });
}, CHECK_INTERVAL);

// BETTER: Event-driven - only compute when work arrives
var EventEmitter = require('events');
var jobQueue = new EventEmitter();

jobQueue.on('newJob', function(job) {
  processJob(job);
});

// Even better: use OS-level signals or webhook triggers
// so the process can sleep between events

Batch processing over real-time. Aggregate work and process it in bursts rather than handling each item individually. Every cold start of a computation pipeline has overhead. Batching amortizes that overhead across multiple items.

var BATCH_SIZE = 50;
var BATCH_INTERVAL = 300000; // 5 minutes
var pendingItems = [];

function addItem(item) {
  pendingItems.push(item);
  if (pendingItems.length >= BATCH_SIZE) {
    processBatch();
  }
}

function processBatch() {
  if (pendingItems.length === 0) return;

  var batch = pendingItems.splice(0, BATCH_SIZE);
  console.log('Processing batch of ' + batch.length + ' items');

  // One database connection, one transaction, one commit
  var db = require('./db');
  db.transaction(function(trx) {
    return trx.batchInsert('items', batch);
  })
  .then(function() {
    console.log('Batch committed');
  })
  .catch(function(err) {
    console.error('Batch failed, requeueing:', err);
    pendingItems = batch.concat(pendingItems);
  });
}

// Only run the interval during high-solar hours
var batchTimer = null;
function startBatching() {
  batchTimer = setInterval(processBatch, BATCH_INTERVAL);
}
function stopBatching() {
  clearInterval(batchTimer);
  processBatch(); // flush remaining
}

Aggressive caching. Compute once, serve from cache repeatedly. Every cache hit is CPU cycles (and therefore watts) saved. I use multi-layer caching: in-memory for hot data, SQLite on disk for warm data, and only hit the primary data source when absolutely necessary.

var NodeCache = require('node-cache');
var memCache = new NodeCache({ stdTTL: 600 }); // 10 minute TTL
var sqlite3 = require('better-sqlite3');
var diskCache = new sqlite3('./cache.db');

diskCache.exec('CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT, expires INTEGER)');

function getCached(key, fetchFn) {
  // Layer 1: Memory
  var memResult = memCache.get(key);
  if (memResult !== undefined) {
    return Promise.resolve(memResult);
  }

  // Layer 2: Disk
  var diskRow = diskCache.prepare(
    'SELECT value FROM cache WHERE key = ? AND expires > ?'
  ).get(key, Date.now());

  if (diskRow) {
    var parsed = JSON.parse(diskRow.value);
    memCache.set(key, parsed);
    return Promise.resolve(parsed);
  }

  // Layer 3: Actual computation
  return fetchFn()
    .then(function(result) {
      memCache.set(key, result);
      diskCache.prepare(
        'INSERT OR REPLACE INTO cache (key, value, expires) VALUES (?, ?, ?)'
      ).run(key, JSON.stringify(result), Date.now() + 3600000);
      return result;
    });
}

The Solar-Aware Scheduler

This is the piece I'm most proud of. A scheduler that understands power availability and adjusts computational workload accordingly.

The concept: monitor battery state of charge and solar production. When power is abundant, run expensive background tasks. When power is scarce, defer everything non-essential.

var cron = require('node-cron');

var PowerMonitor = {
  getBatteryPercent: function() {
    // Read from charge controller via Modbus/serial
    // For prototype, read from a file updated by the charge controller
    var fs = require('fs');
    try {
      var data = fs.readFileSync('/var/solar/battery_soc.txt', 'utf8');
      return parseFloat(data.trim());
    } catch (e) {
      return 50; // assume mid-charge if we can't read
    }
  },

  getSolarWatts: function() {
    try {
      var fs = require('fs');
      var data = fs.readFileSync('/var/solar/pv_watts.txt', 'utf8');
      return parseFloat(data.trim());
    } catch (e) {
      return 0;
    }
  },

  getPowerBudget: function() {
    var battery = this.getBatteryPercent();
    var solar = this.getSolarWatts();

    if (battery > 80 && solar > 500) return 'abundant';
    if (battery > 50 && solar > 200) return 'normal';
    if (battery > 30) return 'conserve';
    return 'critical';
  }
};

var TaskScheduler = {
  tasks: {
    abundant: [
      'runFullBackup',
      'rebuildSearchIndex',
      'processImageThumbnails',
      'runAnalyticsAggregation',
      'sendQueuedEmails'
    ],
    normal: [
      'runIncrementalBackup',
      'processHighPriorityQueue',
      'sendQueuedEmails'
    ],
    conserve: [
      'processHighPriorityQueue'
    ],
    critical: []
  },

  run: function() {
    var budget = PowerMonitor.getPowerBudget();
    var allowedTasks = this.tasks[budget];

    console.log('[Solar Scheduler] Power budget: ' + budget +
      ' | Battery: ' + PowerMonitor.getBatteryPercent() + '%' +
      ' | Solar: ' + PowerMonitor.getSolarWatts() + 'W' +
      ' | Tasks allowed: ' + allowedTasks.length);

    allowedTasks.forEach(function(taskName) {
      try {
        require('./tasks/' + taskName).execute();
      } catch (err) {
        console.error('[Solar Scheduler] Task failed: ' + taskName, err.message);
      }
    });
  }
};

// Check every 15 minutes
cron.schedule('*/15 * * * *', function() {
  TaskScheduler.run();
});

This pattern — power-aware task scheduling — is directly applicable to any system that needs to manage computational resources dynamically. Replace "battery percent" with "CPU budget" or "cloud spending threshold" and the same architecture works for cost-optimized cloud deployments.
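As a sketch of that translation: the same `getPowerBudget` interface, driven by a spend counter instead of a charge controller. The `CostMonitor` name, cap, and thresholds here are my own invention, not any cloud SDK:

```javascript
// Hypothetical cloud analog of PowerMonitor: same tier names,
// but the "battery" is the remaining fraction of a monthly spend cap.
var CostMonitor = {
  monthlyCapUsd: 200,
  spentUsd: 0,

  recordSpend: function(usd) {
    this.spentUsd += usd;
  },

  getPowerBudget: function() {
    var remaining = 1 - this.spentUsd / this.monthlyCapUsd;
    if (remaining > 0.5) return 'abundant';
    if (remaining > 0.25) return 'normal';
    if (remaining > 0.1) return 'conserve';
    return 'critical';
  }
};

CostMonitor.recordSpend(120);
console.log(CostMonitor.getPowerBudget()); // 'normal' at 40% remaining
```

Because the interface matches, `TaskScheduler` above would consume this monitor unchanged.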


Database Design for Intermittent Power

Power can go out. If it's a cloudy week in December and I'm running low on batteries, I might shut the server down for a day or two. The system needs to survive ungraceful shutdowns and resume cleanly.

SQLite is my go-to for this. It's built for exactly this scenario. WAL mode gives you crash resilience. The entire database is a single file that's easy to back up. There's no server process to manage.

For PostgreSQL (which I use for some Grizzly Peak Software projects), the approach is different. I use a write-ahead buffer that stores operations locally when the main database is unreachable, then replays them when connectivity returns.

var fs = require('fs');
var path = require('path');

var BUFFER_DIR = '/var/data/write-buffer';

function ensureBufferDir() {
  if (!fs.existsSync(BUFFER_DIR)) {
    fs.mkdirSync(BUFFER_DIR, { recursive: true });
  }
}

function bufferedWrite(tableName, operation, data) {
  var pg = require('./db/postgres');

  // buildQuery (defined elsewhere) turns (table, operation, data) into SQL
  return pg.query(buildQuery(tableName, operation, data))
    .catch(function(err) {
      console.warn('Database unreachable, buffering write:', err.message);
      ensureBufferDir();

      var entry = {
        timestamp: Date.now(),
        table: tableName,
        operation: operation,
        data: data
      };

      var filename = entry.timestamp + '-' + Math.random().toString(36).slice(2, 10) + '.json';
      fs.writeFileSync(
        path.join(BUFFER_DIR, filename),
        JSON.stringify(entry)
      );

      return { buffered: true, file: filename };
    });
}

function replayBuffer() {
  ensureBufferDir();
  var files = fs.readdirSync(BUFFER_DIR).sort();

  if (files.length === 0) {
    console.log('Write buffer empty, nothing to replay');
    return Promise.resolve();
  }

  console.log('Replaying ' + files.length + ' buffered writes');
  var pg = require('./db/postgres');

  return files.reduce(function(chain, filename) {
    return chain.then(function() {
      var filepath = path.join(BUFFER_DIR, filename);
      var entry = JSON.parse(fs.readFileSync(filepath, 'utf8'));

      return pg.query(buildQuery(entry.table, entry.operation, entry.data))
        .then(function() {
          fs.unlinkSync(filepath);
          console.log('Replayed and removed: ' + filename);
        })
        .catch(function(err) {
          console.error('Replay failed for ' + filename + ':', err.message);
          // Leave the file for next replay attempt
        });
    });
  }, Promise.resolve());
}

This is essentially a poor man's event sourcing. It's not elegant. It doesn't handle all edge cases. But it means my system survives power outages without losing data, and that's what matters when your infrastructure runs on sunlight.


The Request Budget Pattern

Here's a pattern I developed specifically for power-constrained operation that I now use everywhere: request budgeting.

Instead of handling every incoming request identically, assign each request a "cost" based on computational complexity. Set a budget per time period based on available power. When the budget is exceeded, defer or reject low-priority requests.

var requestBudget = {
  available: 1000,
  period: 60000, // 1 minute
  used: 0,
  lastReset: Date.now(),

  costs: {
    'static_file': 1,
    'cached_page': 2,
    'database_read': 10,
    'database_write': 20,
    'search_query': 50,
    'image_processing': 200,
    'ai_inference': 500
  },

  canAfford: function(requestType) {
    this.maybeReset();
    var cost = this.costs[requestType] || 10;
    return this.used + cost <= this.available;
  },

  spend: function(requestType) {
    var cost = this.costs[requestType] || 10;
    this.used += cost;
  },

  maybeReset: function() {
    var now = Date.now();
    if (now - this.lastReset > this.period) {
      this.used = 0;
      this.lastReset = now;
    }
  },

  adjustBudget: function(powerBudget) {
    var budgets = {
      'abundant': 2000,
      'normal': 1000,
      'conserve': 300,
      'critical': 50
    };
    this.available = budgets[powerBudget] || 1000;
  }
};

function budgetMiddleware(requestType) {
  return function(req, res, next) {
    if (!requestBudget.canAfford(requestType)) {
      res.set('Retry-After', '60');
      return res.status(503).json({
        error: 'Server operating in power conservation mode',
        retryAfter: 60
      });
    }
    requestBudget.spend(requestType);
    next();
  };
}

// Usage in routes
var app = require('express')();
app.get('/api/search', budgetMiddleware('search_query'), handleSearch);
app.post('/api/process-image', budgetMiddleware('image_processing'), handleImage);

This pattern translates directly to cloud cost management. Replace "power budget" with "monthly spend limit" and you have a cost-aware service that degrades gracefully instead of blowing through your AWS budget.


What Actually Runs on My Solar Setup

Let me tell you what I actually run from the cabin, not what I theoretically could run.

A lightweight Express server that serves as a personal dashboard and development environment. It handles my project management, notes, and local file access. It runs on port 3000 on my local network.

A SQLite-backed note-taking and task management system. No external dependencies. No cloud sync (Starlink latency makes real-time sync painful anyway). Just a local web interface to my own data.

A cron-based data collection system that pulls metrics from my solar charge controller, weather station, and Starlink terminal. This data feeds the power monitoring system described above. It also gives me a nice dashboard of my energy production and consumption over time.

Development environments for client work and Grizzly Peak Software. This is the main power consumer — running Node, sometimes building frontend assets, occasionally running test suites.

What I don't run: databases with persistent connections. Background workers that poll. Anything with a GPU. AI inference (I use API calls for that — let someone else's data center handle the watts).

The total server load for everything except active development is about 15-20 watts. That's less than a single incandescent light bulb. The lesson: useful software doesn't need to be power-hungry software.


Lessons for Non-Off-Grid Developers

Even if you never plan to power a server with solar panels, the mental model of constrained computing produces better software.

Think about cost per request. Not in dollars, but in actual computation. Is this database query doing a full table scan when it could use an index? Is this API endpoint computing something that could be cached? Every unnecessary CPU cycle is waste — in watts, in dollars, or in latency.

Design for intermittent operation. Systems that can start, stop, and resume cleanly are more reliable than systems that assume continuous uptime. This is true for cloud servers that get preempted, containers that get rescheduled, and laptops that go to sleep.

Cache aggressively and intelligently. The fastest computation is the one you don't do. The cheapest watt is the one you don't consume. Multi-layer caching is not premature optimization when it determines whether your system can operate at all.

Make background work power-aware. Even in a cloud environment, there's value in scheduling expensive background tasks for off-peak hours when compute is cheaper, or batching them to reduce per-item overhead.

Measure your actual resource consumption. I know exactly how many watts my server draws in different modes because my battery monitor forces me to care. Most developers have no idea what their applications actually consume. Cloud dashboards give you cost, not consumption. Understanding the relationship between your code and the resources it consumes makes you a better engineer.


The Future of Constrained Computing

I think the era of unlimited-compute assumptions is ending. Not because we're running out of electricity (though data center power consumption is becoming a real infrastructure problem), but because the economics are shifting.

Cloud costs scale with usage. AI inference is expensive. Edge computing inherently operates in constrained environments. Mobile devices are battery-limited. Even in wealthy countries with reliable power grids, the cost of computation is becoming a first-class design consideration.

The patterns I've developed for running software on solar power — event-driven architecture, aggressive caching, power-aware scheduling, request budgeting, intermittent-operation resilience — are not eccentric off-grid quirks. They're efficient computing patterns that happen to be most obvious when your power comes from the sun.

Build software like your electricity costs something. Because it does, whether you see the bill or not.


Shane Larson is the founder of Grizzly Peak Software and the author of a technical book on training large language models. He builds software and manages solar panels from his off-grid cabin in Caswell Lakes, Alaska. You can find more of his work at grizzlypeaksoftware.com.
