Code Coverage Analysis: Metrics That Matter
A practical guide to code coverage analysis in Node.js, comparing nyc, c8, and Jest coverage tools, with CI enforcement, threshold strategies, and avoiding common coverage pitfalls.
Code coverage tells you what your tests execute, not what they verify. That distinction is the single most important thing to understand before you instrument a single line. Coverage is a necessary diagnostic tool for finding dead zones in your test suite, but it is a terrible proxy for quality when treated as a goal unto itself.
This article walks through the coverage tools available in the Node.js ecosystem, the metrics that actually surface risk, and how to wire coverage enforcement into CI without creating perverse incentives that lead to worse code.
Prerequisites
- Node.js v18+ installed locally
- Working knowledge of Express.js and npm
- Familiarity with a test runner (Mocha, Jest, or Node's built-in test runner)
- Basic understanding of CI pipelines (GitHub Actions examples included)
Install the tools we will use throughout:
npm install --save-dev nyc mocha chai supertest c8
Coverage Metrics Explained
There are four primary coverage metrics. Every coverage tool reports them, but most engineers only glance at one. That is a mistake.
Line Coverage
Line coverage measures the percentage of executable lines that were run during the test suite. It is the most commonly cited metric and the least useful on its own.
// lines.js
function processOrder(order) {
var total = 0; // line 3
for (var i = 0; i < order.items.length; i++) { // line 4
total += order.items[i].price; // line 5
} // line 6
if (order.coupon) { // line 7
total = total * 0.9; // line 8
} // line 9
return total; // line 10
}
If your test only passes an order without a coupon, you get 87.5% line coverage (7 of 8 executable lines). Line 8 is never executed. That might be acceptable, or it might be hiding a bug where the coupon discount is applied incorrectly.
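A second test that exercises the coupon path closes the gap and, more importantly, verifies the discount math. Here is a sketch using the Mocha and Chai setup from the prerequisites (it assumes processOrder is exported from lines.js, which the snippet above does not show):
// test/lines.test.js
var expect = require("chai").expect;
var processOrder = require("../lines"); // assumes module.exports = processOrder

describe("processOrder", function() {
  it("sums item prices without a coupon", function() {
    var total = processOrder({ items: [{ price: 10 }, { price: 20 }] });
    expect(total).to.equal(30);
  });

  it("applies the 10% coupon discount", function() {
    var total = processOrder({ items: [{ price: 100 }], coupon: "SAVE10" });
    expect(total).to.equal(90); // line 8 now executes, and the math is verified
  });
});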
Statement Coverage
Statement coverage is similar to line coverage but counts individual statements rather than lines. A single line can contain multiple statements:
var x = 1; var y = 2; var z = x + y;
That is one line but three statements. In practice, statement and line coverage diverge only when you write compressed or minified-style code. For typical Node.js projects, they track closely.
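One place where they can diverge is a guard clause written on a single line:
// One line, two statements: the condition and the early return
function readConfig(filePath, callback) {
  if (!filePath) return callback(new Error("filePath required"));
  callback(null, { path: filePath });
}

// A test that always passes a path executes the line (the condition runs),
// so line coverage reports it as covered, while the early-return statement
// never runs and statement coverage stays below 100%.
readConfig("./app.json", function() {});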
Function Coverage
Function coverage measures whether each declared function was invoked at least once. It is a coarser metric. A function with 50 lines of branching logic shows 100% function coverage if called once with any input.
// 100% function coverage, 40% branch coverage
function calculateShipping(weight, country, expedited) {
if (country === "US") {
if (expedited) {
return weight * 5.99;
}
return weight * 2.99;
}
if (country === "CA") {
return weight * 4.50;
}
return weight * 8.99;
}
// This single test gives 100% function coverage
calculateShipping(10, "US", false);
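Reaching full branch coverage of the same function takes one call per path:
// Four calls, one per branch path
calculateShipping(10, "US", true);   // US, expedited
calculateShipping(10, "US", false);  // US, standard
calculateShipping(10, "CA", false);  // Canada rate
calculateShipping(10, "DE", false);  // default international rate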
Branch Coverage
Branch coverage tracks whether every possible path through control flow was taken. This is the metric that matters most. Every if, else, ternary, switch case, ||, &&, and ?? operator creates branches.
function authenticate(user, token) {
if (!user) { // branch 1: true/false
return { error: "No user" };
}
if (!token || token.expired) { // branch 2: !token true/false
// branch 3: token.expired true/false
return { error: "Invalid token" };
}
if (user.role === "admin") { // branch 4: true/false
return { access: "full", user: user };
}
return { access: "limited", user: user };
}
That function has at least 4 branch points, which means 8 branch outcomes and five distinct paths through the code. A test that only passes a valid non-admin user with a good token hits exactly one path. Branch coverage exposes this immediately. Line coverage might show 60-70% and look acceptable.
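For contrast, here is a sketch of a Mocha suite that drives every branch of authenticate, using Chai from the prerequisites (the require path is an assumption):
var expect = require("chai").expect;
var authenticate = require("../src/authenticate"); // path is an assumption

describe("authenticate", function() {
  it("rejects a missing user", function() {
    expect(authenticate(null, { expired: false })).to.deep.equal({ error: "No user" });
  });

  it("rejects a missing token", function() {
    expect(authenticate({ role: "user" }, null)).to.deep.equal({ error: "Invalid token" });
  });

  it("rejects an expired token", function() {
    expect(authenticate({ role: "user" }, { expired: true })).to.deep.equal({ error: "Invalid token" });
  });

  it("grants full access to admins", function() {
    expect(authenticate({ role: "admin" }, { expired: false }).access).to.equal("full");
  });

  it("grants limited access to everyone else", function() {
    expect(authenticate({ role: "user" }, { expired: false }).access).to.equal("limited");
  });
});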
Tools Comparison: nyc vs c8 vs Jest
nyc (Istanbul)
nyc is the CLI wrapper around Istanbul, the most established coverage tool in the Node.js ecosystem. It works by instrumenting your source code at load time, inserting counters into every statement, branch, and function.
Pros:
- Mature, battle-tested, widely documented
- Works with any test runner (Mocha, Tape, AVA, raw Node)
- Extensive configuration options
- Handles require hooks and transpilation pipelines
Cons:
- Instrumentation overhead: 15-40% slower test execution
- Source maps can break with complex transpilation chains
- Older architecture, less active development
# Basic usage with Mocha
npx nyc mocha --recursive test/
# With reporters
npx nyc --reporter=text --reporter=html mocha test/
c8 (V8 Native Coverage)
c8 uses V8's built-in code coverage, which was added in Node.js 10. Instead of rewriting your source code, it asks the V8 engine to track execution directly. This is fundamentally more accurate and faster.
Pros:
- No instrumentation overhead (near-zero performance impact)
- Accurate coverage for native ESM, dynamic imports, child processes
- Handles code that nyc misreports (eval, new Function, vm module)
- Simpler architecture
Cons:
- Requires Node.js 10.12+ (not a real constraint in 2026)
- Fewer configuration options than nyc
- HTML reports are less polished than Istanbul's
# Basic usage
npx c8 mocha --recursive test/
# With thresholds
npx c8 --check-coverage --lines 80 --branches 75 node --test
Jest Built-in Coverage
Jest uses Istanbul under the hood, but wraps it in its own collection mechanism. You enable it with a flag.
npx jest --coverage
Jest's coverage integration is convenient but has quirks. By default it only reports coverage for files that are imported during tests; files that are never required simply do not appear in the report at all. You need the collectCoverageFrom option to include untested files:
// jest.config.js
module.exports = {
collectCoverage: true,
collectCoverageFrom: [
"src/**/*.js",
"!src/**/*.test.js",
"!src/**/index.js"
],
coverageThreshold: {
global: {
branches: 75,
functions: 80,
lines: 80,
statements: 80
}
}
};
My Recommendation
Use c8 for new projects. The V8-native approach is more accurate, faster, and requires less configuration. Use nyc if you are on an established project that already has Istanbul configuration or if you need features like per-file thresholds with complex glob patterns. Use Jest's built-in coverage if you are already using Jest and do not want another dependency.
Configuring nyc for Express.js Projects
Here is a production-grade nyc configuration for an Express.js project. Put this in your package.json or a .nycrc.json file.
{
"nyc": {
"all": true,
"include": [
"src/**/*.js",
"routes/**/*.js",
"models/**/*.js",
"middleware/**/*.js",
"utils/**/*.js"
],
"exclude": [
"test/**",
"coverage/**",
"node_modules/**",
"**/*.test.js",
"**/*.spec.js",
"migrations/**",
"seeds/**",
"scripts/**"
],
"reporter": [
"text",
"text-summary",
"html",
"lcov"
],
"report-dir": "./coverage",
"temp-dir": "./.nyc_output",
"check-coverage": true,
"branches": 75,
"lines": 80,
"functions": 80,
"statements": 80,
"watermarks": {
"lines": [70, 85],
"functions": [70, 85],
"branches": [65, 80],
"statements": [70, 85]
},
"per-file": false,
"skip-full": false,
"clean": true
}
}
The all: true flag is critical. Without it, nyc only reports coverage for files that are require()-ed during tests. Your untested utility files would be invisible. With all: true, every file matching your include patterns appears in the report, even with 0% coverage.
The watermarks control the color coding in reports. Lines below the low watermark show red. Between the watermarks shows yellow. Above the high watermark shows green.
Wire it into your npm scripts:
{
"scripts": {
"test": "mocha --recursive test/",
"test:coverage": "nyc npm test",
"test:coverage:check": "nyc check-coverage",
"test:ci": "nyc --reporter=lcov --reporter=text-summary npm test"
}
}
c8 with Native V8 Coverage
c8 configuration is lighter. Create a .c8rc.json:
{
"all": true,
"include": [
"src/**/*.js",
"routes/**/*.js",
"models/**/*.js"
],
"exclude": [
"test/**",
"node_modules/**"
],
"reporter": ["text", "html", "lcov"],
"report-dir": "./coverage",
"check-coverage": true,
"lines": 80,
"branches": 75,
"functions": 80,
"statements": 80,
"clean": true,
"skip-full": false
}
c8 works especially well with Node.js's built-in test runner:
# Node.js built-in test runner with c8
npx c8 node --test test/**/*.test.js
One underappreciated advantage of c8: it correctly covers code inside eval(), new Function(), and the vm module. nyc cannot instrument those because the code is not loaded through the normal require pipeline. If you use template engines or dynamic code generation, c8 gives you real numbers.
// c8 covers this correctly; nyc does not
var vm = require("vm");
function runSandboxed(code, context) {
var script = new vm.Script(code);
var ctx = vm.createContext(context);
return script.runInContext(ctx);
}
Coverage Thresholds and CI Enforcement
Setting thresholds without enforcement is wishful thinking. Here is how to make coverage gates real in a GitHub Actions pipeline:
# .github/workflows/test.yml
name: Tests
on:
push:
branches: [main, master]
pull_request:
branches: [main, master]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Run tests with coverage
run: npx c8 --check-coverage --lines 80 --branches 75 --functions 80 npm test
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@v4
with:
file: ./coverage/lcov.info
fail_ci_if_error: false
- name: Archive coverage report
if: always()
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: coverage/
retention-days: 14
The --check-coverage flag causes the process to exit with code 1 if any threshold is not met. The CI job fails and the pull request cannot merge.
Threshold Strategy
Do not start at 80% and demand it immediately. That leads to garbage tests written to satisfy a number. Here is a pragmatic ramp-up:
{
"check-coverage": true,
"branches": 50,
"lines": 60,
"functions": 60,
"statements": 60
}
Start with thresholds that reflect your current coverage. Increase by 5% per quarter. The goal is a ratchet: coverage can go up but never down. Some teams automate this with a script that reads the current coverage from coverage-summary.json (produced by the json-summary reporter) and updates the config.
// scripts/update-coverage-thresholds.js
var fs = require("fs");
var path = require("path");
var summaryPath = path.join(__dirname, "..", "coverage", "coverage-summary.json");
var configPath = path.join(__dirname, "..", ".c8rc.json");
var summary = JSON.parse(fs.readFileSync(summaryPath, "utf8"));
var config = JSON.parse(fs.readFileSync(configPath, "utf8"));
var total = summary.total;
// Floor to nearest 5, then subtract 5 for breathing room
function floorTo5(n) {
return Math.floor(n / 5) * 5 - 5;
}
config.lines = Math.max(config.lines, floorTo5(total.lines.pct));
config.branches = Math.max(config.branches, floorTo5(total.branches.pct));
config.functions = Math.max(config.functions, floorTo5(total.functions.pct));
config.statements = Math.max(config.statements, floorTo5(total.statements.pct));
fs.writeFileSync(configPath, JSON.stringify(config, null, 2) + "\n");
console.log("Updated thresholds:");
console.log(" Lines: " + config.lines + "% (actual: " + total.lines.pct + "%)");
console.log(" Branches: " + config.branches + "% (actual: " + total.branches.pct + "%)");
console.log(" Functions: " + config.functions + "% (actual: " + total.functions.pct + "%)");
console.log(" Statements: " + config.statements + "% (actual: " + total.statements.pct + "%)");
Interpreting Coverage Reports
Text Report
The text reporter gives you a quick summary in the terminal:
--------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------------|---------|----------|---------|---------|-------------------
All files | 82.14 | 68.75 | 85.71 | 81.48 |
src/ | 83.33 | 71.43 | 88.89 | 82.76 |
app.js | 78.57 | 60.00 | 80.00 | 77.78 | 34,45-48,67
server.js | 100.00 | 100.00 | 100.00 | 100.00 |
routes/ | 81.25 | 66.67 | 83.33 | 80.00 |
users.js | 85.71 | 75.00 | 100.00 | 84.62 | 23,41
orders.js | 76.92 | 57.14 | 66.67 | 75.00 | 18-22,35,48-52
middleware/ | 80.00 | 66.67 | 100.00 | 80.00 |
auth.js | 80.00 | 66.67 | 100.00 | 80.00 | 15,28
--------------------|---------|----------|---------|---------|-------------------
The "Uncovered Line #s" column is where the actionable information lives. Look at orders.js: lines 18-22 and 48-52 are uncovered ranges. Those are likely error handling paths or edge cases that need tests.
HTML Report
The HTML report is the most useful for investigation. It highlights individual source files with red (uncovered), yellow (partially covered), and green (covered) lines.
# Generate and open HTML report
npx c8 --reporter=html npm test
open coverage/index.html # macOS
start coverage/index.html # Windows
The HTML report also shows branch markers. An I marker means the if path was never taken. An E marker means the else path was never taken. A number like 2x next to a line tells you how many times that line executed.
LCOV Report
LCOV is the standard format for coverage services like Codecov, Coveralls, and SonarQube. The lcov.info file is a plain-text format:
TN:
SF:/app/src/routes/users.js
FN:5,getUsers
FN:15,getUserById
FN:32,createUser
FNDA:4,getUsers
FNDA:3,getUserById
FNDA:2,createUser
FNF:3
FNH:3
DA:5,4
DA:6,4
DA:7,4
DA:15,3
DA:16,3
DA:23,0
DA:24,0
LF:20
LH:16
BRF:8
BRH:5
end_of_record
DA:23,0 means line 23 was hit 0 times. BRF:8 means 8 branches found, BRH:5 means 5 branches hit. You rarely read this directly, but understanding the format helps when debugging CI integration issues.
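When a CI integration disagrees with your local numbers, a few lines of Node are enough to see what lcov.info actually contains. A quick sketch (the file path is an assumption):
// scripts/lcov-summary.js: print per-file line coverage straight from lcov.info
var fs = require("fs");

var records = fs.readFileSync("coverage/lcov.info", "utf8").split("end_of_record");

records.forEach(function(record) {
  var file = (record.match(/^SF:(.+)$/m) || [])[1];
  if (!file) return;
  var linesFound = Number((record.match(/^LF:(\d+)$/m) || [])[1] || 0);
  var linesHit = Number((record.match(/^LH:(\d+)$/m) || [])[1] || 0);
  var pct = linesFound ? ((linesHit / linesFound) * 100).toFixed(1) + "%" : "n/a";
  console.log(pct + "\t" + file);
});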
Branch Coverage Deep Dive
Branch coverage is the metric I care about most, and I will explain why with a concrete example.
// middleware/rateLimit.js
var rateLimits = {};
function rateLimit(options) {
var windowMs = options.windowMs || 60000;
var max = options.max || 100;
var keyGenerator = options.keyGenerator || function(req) { return req.ip; };
var handler = options.handler || null;
return function(req, res, next) {
var key = keyGenerator(req);
var now = Date.now();
if (!rateLimits[key]) {
rateLimits[key] = { count: 1, resetTime: now + windowMs };
return next();
}
if (now > rateLimits[key].resetTime) {
rateLimits[key] = { count: 1, resetTime: now + windowMs };
return next();
}
rateLimits[key].count++;
if (rateLimits[key].count > max) {
if (handler) {
return handler(req, res, next);
}
res.status(429).json({ error: "Too many requests" });
return;
}
next();
};
}
module.exports = rateLimit;
A naive test that sends one request with one IP gives you 100% function coverage and roughly 65% line coverage. But branch coverage reveals the gaps:
- What happens when the window expires? (line 19 branch)
- What happens when count > max? (line 25 branch)
- What happens when a custom handler is provided? (line 26 branch)
- What happens with the default keyGenerator? (line 6 fallback)
- What happens when handler is null vs provided? (line 26-27 branch)
These are exactly the paths where bugs hide. The rate limit might reset incorrectly. The custom handler might not receive the right arguments. The default response might not set the right status code. Branch coverage forces you to think about these scenarios.
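Here is a sketch of tests that force each of those branches. Sinon's fake timers move time past the window; the require path is an assumption, and sinon is an extra dev dependency:
var expect = require("chai").expect;
var sinon = require("sinon");
var rateLimit = require("../middleware/rateLimit"); // path is an assumption

describe("rateLimit branches", function() {
  var clock;
  beforeEach(function() { clock = sinon.useFakeTimers(); });
  afterEach(function() { clock.restore(); });

  // Minimal req/res stand-ins so the middleware can run without Express
  function run(middleware, ip) {
    var res = {
      statusCode: null,
      status: function(code) { this.statusCode = code; return this; },
      json: function(body) { this.body = body; return this; }
    };
    var calledNext = false;
    middleware({ ip: ip }, res, function() { calledNext = true; });
    return { res: res, next: calledNext };
  }

  it("blocks once the count exceeds max", function() {
    var mw = rateLimit({ max: 1, windowMs: 1000 });
    run(mw, "1.1.1.1");
    expect(run(mw, "1.1.1.1").res.statusCode).to.equal(429);
  });

  it("resets after the window expires", function() {
    var mw = rateLimit({ max: 1, windowMs: 1000 });
    run(mw, "2.2.2.2");
    clock.tick(1500); // jump past resetTime
    expect(run(mw, "2.2.2.2").next).to.equal(true);
  });

  it("delegates to a custom handler when over the limit", function() {
    var handler = sinon.spy();
    var mw = rateLimit({ max: 0, windowMs: 1000, handler: handler });
    run(mw, "3.3.3.3"); // creates the entry
    run(mw, "3.3.3.3"); // exceeds max, handler takes over
    expect(handler.called).to.equal(true);
  });
});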
Logical Assignment and Short-Circuit Branches
Modern JavaScript creates hidden branches that Istanbul and c8 track:
function getConfig(options) {
var port = options.port || 3000; // branch: options.port truthy/falsy
var host = options.host || "localhost"; // branch: options.host truthy/falsy
var debug = options.debug && options.verbose; // branch: debug truthy, then verbose truthy/falsy
var name = options.name ?? "app"; // branch: options.name null/undefined vs defined
return { port: port, host: host, debug: debug, name: name };
}
Every ||, &&, ??, and ternary operator is a branch. A function with 5 such operators has at least 10 branch paths. Coverage tools report these individually, which is why branch coverage is almost always lower than line coverage.
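Covering these hidden branches means calling the function with and without each optional field. A sketch against the getConfig example above (the require path is an assumption):
var expect = require("chai").expect;
var getConfig = require("../src/getConfig"); // path is an assumption

describe("getConfig branches", function() {
  it("falls back to defaults when options are empty", function() {
    var cfg = getConfig({});
    expect(cfg.port).to.equal(3000);       // falsy side of ||
    expect(cfg.host).to.equal("localhost");
    expect(cfg.name).to.equal("app");      // nullish side of ??
  });

  it("uses provided values when present", function() {
    var cfg = getConfig({ port: 8080, host: "0.0.0.0", debug: true, verbose: true, name: "api" });
    expect(cfg.port).to.equal(8080);       // truthy side of ||
    expect(cfg.debug).to.equal(true);      // both sides of && truthy
    expect(cfg.name).to.equal("api");
  });

  it("covers the remaining && branch: debug set but verbose missing", function() {
    var cfg = getConfig({ debug: true });
    expect(cfg.debug).to.equal(undefined); // true && undefined
  });
});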
Excluding Files and Patterns from Coverage
Not every file needs coverage. Test utilities, configuration files, migration scripts, and seed data are legitimate exclusions.
File-Level Exclusion
In your config:
{
"exclude": [
"test/**",
"coverage/**",
"migrations/**",
"seeds/**",
"scripts/**",
"**/*.config.js",
"**/index.js"
]
}
Inline Exclusion with Istanbul Comments
Sometimes you need to exclude specific lines or blocks within a file:
// Exclude the next statement
/* istanbul ignore next */
var env = process.env.NODE_ENV;
// Exclude an entire if block
/* istanbul ignore if */
if (process.env.NODE_ENV === "development") {
app.use(require("morgan")("dev"));
}
// Exclude an else branch
if (config.database) {
connectToDatabase(config.database);
} else {
/* istanbul ignore next */
console.warn("No database configured, using in-memory store");
useInMemoryStore();
}
// Newer c8 releases accept the same Istanbul-style comments; c8 also has its own variant
/* c8 ignore start */
function debugHelper() {
// This entire function is excluded from coverage
console.log("Debug mode active");
}
/* c8 ignore stop */
/* c8 ignore next */
var unusedExport = module.exports.unusedExport = function() {};
Use exclusion comments sparingly. Every exclusion is technical debt. If you find yourself excluding more than 5% of a file, the file either needs refactoring or should be excluded at the config level.
Coverage for Async Code and Error Handlers
Async code and error handlers are the hardest parts of a Node.js application to cover. They are also where bugs are most consequential.
Testing Error Handlers
Express error handlers have a specific four-argument signature. You need to trigger errors to cover them:
// middleware/errorHandler.js
function errorHandler(err, req, res, next) {
var status = err.status || 500;
var message = err.message || "Internal Server Error";
if (err.code === "VALIDATION_ERROR") {
return res.status(400).json({
error: "Validation failed",
details: err.details
});
}
if (err.code === "NOT_FOUND") {
return res.status(404).json({
error: "Resource not found"
});
}
// Log unexpected errors
if (status === 500) {
console.error("Unexpected error:", err.stack);
}
res.status(status).json({ error: message });
}
module.exports = errorHandler;
Test each branch:
var request = require("supertest");
var express = require("express");
var errorHandler = require("../middleware/errorHandler");
describe("Error Handler", function() {
var app;
beforeEach(function() {
app = express();
app.get("/validation-error", function(req, res, next) {
var err = new Error("Invalid input");
err.code = "VALIDATION_ERROR";
err.details = [{ field: "email", message: "required" }];
next(err);
});
app.get("/not-found", function(req, res, next) {
var err = new Error("User not found");
err.code = "NOT_FOUND";
next(err);
});
app.get("/unexpected", function(req, res, next) {
next(new Error("Database connection lost"));
});
app.get("/custom-status", function(req, res, next) {
var err = new Error("Forbidden");
err.status = 403;
next(err);
});
app.use(errorHandler);
});
it("handles validation errors with 400", function(done) {
request(app)
.get("/validation-error")
.expect(400)
.expect(function(res) {
if (res.body.error !== "Validation failed") throw new Error("Wrong error");
if (!res.body.details) throw new Error("Missing details");
})
.end(done);
});
it("handles not found errors with 404", function(done) {
request(app)
.get("/not-found")
.expect(404)
.end(done);
});
it("handles unexpected errors with 500", function(done) {
request(app)
.get("/unexpected")
.expect(500)
.expect(function(res) {
if (res.body.error !== "Database connection lost") throw new Error("Wrong message");
})
.end(done);
});
it("respects custom status codes", function(done) {
request(app)
.get("/custom-status")
.expect(403)
.end(done);
});
});
Covering Promise Rejection Paths
Uncovered rejection paths are the most common source of coverage gaps in async code:
// services/userService.js
var db = require("../db");
function getUserOrders(userId) {
return db.query("SELECT * FROM users WHERE id = $1", [userId])
.then(function(result) {
if (result.rows.length === 0) {
throw { status: 404, message: "User not found" };
}
return db.query("SELECT * FROM orders WHERE user_id = $1", [userId]);
})
.then(function(result) {
return result.rows;
})
.catch(function(err) {
if (err.status) {
throw err; // Re-throw known errors
}
// Wrap unknown database errors
throw { status: 500, message: "Failed to fetch orders", cause: err };
});
}
To cover both the success path and the database failure path, you need to mock db.query to reject:
var sinon = require("sinon");
var db = require("../db");
var userService = require("../services/userService");
describe("getUserOrders", function() {
afterEach(function() {
sinon.restore();
});
it("returns orders for existing user", function() {
sinon.stub(db, "query")
.onFirstCall().resolves({ rows: [{ id: 1, name: "Shane" }] })
.onSecondCall().resolves({ rows: [{ id: 10, total: 99.99 }] });
return userService.getUserOrders(1).then(function(orders) {
if (orders.length !== 1) throw new Error("Expected 1 order");
});
});
it("throws 404 for missing user", function() {
sinon.stub(db, "query").resolves({ rows: [] });
return userService.getUserOrders(999)
.then(function() { throw new Error("Should have thrown"); })
.catch(function(err) {
if (err.status !== 404) throw new Error("Expected 404");
});
});
it("wraps database errors", function() {
sinon.stub(db, "query").rejects(new Error("ECONNREFUSED"));
return userService.getUserOrders(1)
.then(function() { throw new Error("Should have thrown"); })
.catch(function(err) {
if (err.status !== 500) throw new Error("Expected 500");
if (!err.cause) throw new Error("Expected cause");
});
});
});
Combining Unit and Integration Test Coverage
Most projects run unit tests and integration tests separately. You want a single combined coverage report. Both nyc and c8 support merging coverage data.
With nyc
{
"scripts": {
"test:unit": "nyc --no-clean mocha test/unit/",
"test:integration": "nyc --no-clean mocha test/integration/",
"test:all": "npm run test:unit && npm run test:integration",
"coverage:merge": "nyc merge .nyc_output coverage/merged.json",
"coverage:report": "nyc report --temp-dir .nyc_output --reporter=html --reporter=text",
"test:ci": "npm run test:all && npm run coverage:report"
}
}
The --no-clean flag is essential. Without it, each test run wipes the previous coverage data. With --no-clean, coverage accumulates across runs.
With c8
c8 handles this more elegantly. Just run all tests in a single process:
npx c8 npm run test:all
If you need separate runs, point them at a shared temp directory with --no-clean so the raw V8 output accumulates, then generate a single report at the end (c8 also has a --merge-async flag for very large coverage sets):
{
"scripts": {
"test:unit": "c8 --temp-directory .c8_unit node --test test/unit/",
"test:integration": "c8 --temp-directory .c8_integration node --test test/integration/",
"coverage:merge": "c8 report --temp-directory .c8_unit --temp-directory .c8_integration --reporter=html --reporter=text"
}
}
Coverage in Monorepos
Monorepos add complexity because you need per-package coverage that rolls up into an aggregate report.
Workspace-Level Coverage
With npm workspaces:
{
"scripts": {
"test:coverage": "npx c8 --all --include='packages/*/src/**' npm test --workspaces"
}
}
Per-Package Thresholds with nyc
nyc supports per-file thresholds via the per-file option, but for per-package thresholds you need a wrapper script:
// scripts/check-package-coverage.js
var fs = require("fs");
var path = require("path");
var thresholds = {
"packages/api": { branches: 80, lines: 85 },
"packages/auth": { branches: 90, lines: 90 },
"packages/utils": { branches: 70, lines: 75 },
"packages/worker": { branches: 60, lines: 70 }
};
var summary = JSON.parse(
fs.readFileSync(path.join(__dirname, "..", "coverage", "coverage-summary.json"), "utf8")
);
var failed = false;
Object.keys(thresholds).forEach(function(pkg) {
var pkgThresholds = thresholds[pkg];
Object.keys(summary).forEach(function(filePath) {
if (filePath === "total") return;
if (!filePath.includes(pkg)) return;
var data = summary[filePath];
if (data.branches.pct < pkgThresholds.branches) {
console.error(
"FAIL: " + filePath + " branches " + data.branches.pct +
"% < " + pkgThresholds.branches + "% threshold"
);
failed = true;
}
});
});
if (failed) {
process.exit(1);
} else {
console.log("All packages meet coverage thresholds.");
}
Common Coverage Pitfalls
The 100% Coverage Myth
I have seen teams mandate 100% line coverage. The result is always the same: tests that exist only to execute code, not to verify behavior. You get tests like this:
// This test covers the line but verifies nothing
it("calls processOrder", function() {
processOrder({ items: [{ price: 10 }] });
// No assertion. The line was executed. Coverage goes up.
});
This is worse than no test because it gives false confidence. The line is green in the report but nobody verified the output. I have shipped bugs in 100%-covered codebases.
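The fix is not to delete the test but to assert on the result:
// Same line is covered, but now the behavior is verified
it("sums item prices", function() {
  var total = processOrder({ items: [{ price: 10 }] });
  expect(total).to.equal(10);
});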
Healthy targets: 75-85% line coverage, 65-80% branch coverage. Those numbers indicate a team that tests meaningful behavior without chasing metrics.
Gaming Coverage with Trivial Tests
When coverage is tied to performance reviews or merge gates set too aggressively, engineers game the metric. Watch for:
- Tests with no assertions
- Tests that call a function but ignore the return value
- Tests that mock every dependency so the "unit" being tested is a passthrough
- Snapshot tests that cover template rendering without testing logic
Ignoring Branch Coverage
A codebase with 90% line coverage and 45% branch coverage is poorly tested. The high line coverage masks the fact that error paths, edge cases, and fallback logic are completely untested. Always track branch coverage separately and set a threshold for it.
Coverage Decay
Coverage declines slowly as features are added without corresponding tests. A ratchet mechanism prevents this:
# In CI: fail if coverage drops from the committed baseline
npx c8 --check-coverage --lines 82 --branches 71 npm test
Update these numbers after each release cycle. Never lower them.
Complete Working Example
Here is a complete Express.js API project with both nyc and c8 configurations, CI enforcement, and coverage badge generation.
Project Structure
express-coverage-demo/
src/
app.js
routes/
health.js
users.js
middleware/
validate.js
auth.js
services/
userService.js
test/
unit/
userService.test.js
validate.test.js
integration/
users.test.js
.c8rc.json
.nycrc.json
package.json
.github/
workflows/
test.yml
Source Code
// src/app.js
var express = require("express");
var healthRoutes = require("./routes/health");
var userRoutes = require("./routes/users");
var validate = require("./middleware/validate");
var app = express();
app.use(express.json());
app.use("/health", healthRoutes);
app.use("/users", validate.contentType, userRoutes);
app.use(function(err, req, res, next) {
var status = err.status || 500;
res.status(status).json({ error: err.message });
});
module.exports = app;
// src/routes/users.js
var express = require("express");
var router = express.Router();
var userService = require("../services/userService");
var validate = require("../middleware/validate");
router.get("/", function(req, res, next) {
var limit = parseInt(req.query.limit, 10) || 20;
var offset = parseInt(req.query.offset, 10) || 0;
userService.listUsers(limit, offset)
.then(function(users) {
res.json({ data: users, limit: limit, offset: offset });
})
.catch(next);
});
router.get("/:id", function(req, res, next) {
var id = parseInt(req.params.id, 10);
if (isNaN(id)) {
return res.status(400).json({ error: "Invalid user ID" });
}
userService.getUserById(id)
.then(function(user) {
if (!user) {
return res.status(404).json({ error: "User not found" });
}
res.json({ data: user });
})
.catch(next);
});
router.post("/", validate.body(["name", "email"]), function(req, res, next) {
userService.createUser(req.body)
.then(function(user) {
res.status(201).json({ data: user });
})
.catch(function(err) {
if (err.code === "DUPLICATE_EMAIL") {
return res.status(409).json({ error: "Email already exists" });
}
next(err);
});
});
module.exports = router;
// src/middleware/validate.js
function contentType(req, res, next) {
if (req.method === "GET" || req.method === "DELETE") {
return next();
}
var ct = req.headers["content-type"];
if (!ct || ct.indexOf("application/json") === -1) {
return res.status(415).json({ error: "Content-Type must be application/json" });
}
next();
}
function body(requiredFields) {
return function(req, res, next) {
var missing = [];
for (var i = 0; i < requiredFields.length; i++) {
if (!req.body[requiredFields[i]]) {
missing.push(requiredFields[i]);
}
}
if (missing.length > 0) {
return res.status(400).json({
error: "Missing required fields",
fields: missing
});
}
next();
};
}
module.exports = { contentType: contentType, body: body };
// src/services/userService.js
var users = [
{ id: 1, name: "Shane", email: "[email protected]" },
{ id: 2, name: "Jordan", email: "[email protected]" }
];
var nextId = 3;
function listUsers(limit, offset) {
return Promise.resolve(users.slice(offset, offset + limit));
}
function getUserById(id) {
var user = users.find(function(u) { return u.id === id; });
return Promise.resolve(user || null);
}
function createUser(data) {
var existing = users.find(function(u) { return u.email === data.email; });
if (existing) {
var err = new Error("Duplicate email");
err.code = "DUPLICATE_EMAIL";
return Promise.reject(err);
}
var user = { id: nextId++, name: data.name, email: data.email };
users.push(user);
return Promise.resolve(user);
}
module.exports = {
listUsers: listUsers,
getUserById: getUserById,
createUser: createUser
};
Test Files
// test/integration/users.test.js
var request = require("supertest");
var app = require("../../src/app");
describe("Users API", function() {
describe("GET /users", function() {
it("returns a list of users", function(done) {
request(app)
.get("/users")
.expect(200)
.expect(function(res) {
if (!Array.isArray(res.body.data)) throw new Error("Expected array");
if (res.body.data.length === 0) throw new Error("Expected users");
})
.end(done);
});
it("respects limit parameter", function(done) {
request(app)
.get("/users?limit=1")
.expect(200)
.expect(function(res) {
if (res.body.data.length > 1) throw new Error("Expected at most 1");
if (res.body.limit !== 1) throw new Error("Expected limit=1");
})
.end(done);
});
});
describe("GET /users/:id", function() {
it("returns user by ID", function(done) {
request(app)
.get("/users/1")
.expect(200)
.expect(function(res) {
if (res.body.data.name !== "Shane") throw new Error("Wrong user");
})
.end(done);
});
it("returns 404 for missing user", function(done) {
request(app)
.get("/users/9999")
.expect(404)
.end(done);
});
it("returns 400 for invalid ID", function(done) {
request(app)
.get("/users/abc")
.expect(400)
.end(done);
});
});
describe("POST /users", function() {
it("creates a new user", function(done) {
request(app)
.post("/users")
.set("Content-Type", "application/json")
.send({ name: "Alex", email: "[email protected]" })
.expect(201)
.expect(function(res) {
if (res.body.data.name !== "Alex") throw new Error("Wrong name");
if (!res.body.data.id) throw new Error("Missing ID");
})
.end(done);
});
it("rejects duplicate email", function(done) {
request(app)
.post("/users")
.set("Content-Type", "application/json")
.send({ name: "Shane2", email: "[email protected]" })
.expect(409)
.end(done);
});
it("rejects missing fields", function(done) {
request(app)
.post("/users")
.set("Content-Type", "application/json")
.send({ name: "NoEmail" })
.expect(400)
.expect(function(res) {
if (!res.body.fields) throw new Error("Expected fields list");
})
.end(done);
});
it("rejects wrong content type", function(done) {
request(app)
.post("/users")
.set("Content-Type", "text/plain")
.send("name=test")
.expect(415)
.end(done);
});
});
});
Configuration Files
// .c8rc.json
{
"all": true,
"src": ["src"],
"include": ["src/**/*.js"],
"exclude": ["test/**", "node_modules/**", "coverage/**"],
"reporter": ["text", "html", "lcov"],
"report-dir": "./coverage",
"check-coverage": true,
"lines": 85,
"branches": 75,
"functions": 85,
"statements": 85
}
// .nycrc.json
{
"all": true,
"include": ["src/**/*.js"],
"exclude": ["test/**", "node_modules/**", "coverage/**"],
"reporter": ["text", "html", "lcov"],
"report-dir": "./coverage",
"check-coverage": true,
"branches": 75,
"lines": 85,
"functions": 85,
"statements": 85,
"watermarks": {
"lines": [70, 85],
"functions": [70, 85],
"branches": [65, 80],
"statements": [70, 85]
}
}
// package.json (relevant scripts)
{
"scripts": {
"test": "mocha --recursive test/",
"test:c8": "c8 npm test",
"test:nyc": "nyc npm test",
"test:ci": "c8 --check-coverage npm test",
"coverage:badge": "node scripts/coverage-badge.js"
}
}
CI Workflow with Badge Generation
# .github/workflows/test.yml
name: Tests & Coverage
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- name: Run tests with coverage enforcement
run: npx c8 --check-coverage --lines 85 --branches 75 --functions 85 npm test
- name: Generate coverage badge
if: github.ref == 'refs/heads/main' && success()
run: |
COVERAGE=$(npx c8 report --reporter=text-summary 2>/dev/null | grep 'Lines' | awk '{print $3}' | tr -d '%')
echo "Coverage: ${COVERAGE}%"
if [ $(echo "$COVERAGE >= 90" | bc -l) -eq 1 ]; then
COLOR="brightgreen"
elif [ $(echo "$COVERAGE >= 80" | bc -l) -eq 1 ]; then
COLOR="green"
elif [ $(echo "$COVERAGE >= 70" | bc -l) -eq 1 ]; then
COLOR="yellow"
else
COLOR="red"
fi
curl -o coverage-badge.svg "https://img.shields.io/badge/coverage-${COVERAGE}%25-${COLOR}"
- name: Upload coverage
if: always()
uses: codecov/codecov-action@v4
with:
file: ./coverage/lcov.info
- name: Archive coverage report
if: always()
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: coverage/
retention-days: 30
Running the test suite produces output like this:
$ npx c8 npm test
Users API
GET /users
✓ returns a list of users (45ms)
✓ respects limit parameter
GET /users/:id
✓ returns user by ID
✓ returns 404 for missing user
✓ returns 400 for invalid ID
POST /users
✓ creates a new user
✓ rejects duplicate email
✓ rejects missing fields
✓ rejects wrong content type
9 passing (187ms)
--------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------------|---------|----------|---------|---------|-------------------
All files | 95.65 | 87.50 | 100.00 | 95.65 |
src/ | 100.00 | 100.00 | 100.00 | 100.00 |
app.js | 100.00 | 100.00 | 100.00 | 100.00 |
src/middleware/ | 100.00 | 100.00 | 100.00 | 100.00 |
validate.js | 100.00 | 100.00 | 100.00 | 100.00 |
src/routes/ | 93.33 | 83.33 | 100.00 | 93.33 |
health.js | 100.00 | 100.00 | 100.00 | 100.00 |
users.js | 91.67 | 80.00 | 100.00 | 91.67 | 12
src/services/ | 92.86 | 75.00 | 100.00 | 92.86 |
userService.js | 92.86 | 75.00 | 100.00 | 92.86 | 22
--------------------|---------|----------|---------|---------|-------------------
Common Issues and Troubleshooting
1. Coverage Shows 0% for All Files
Error:
--------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
--------------------|---------|----------|---------|---------|-------------------
Cause: The include patterns do not match your file paths. This commonly happens when your include says src/**/*.js but your files are in routes/ and models/.
Fix: Verify your paths match the project structure. Run npx c8 --all --include='**/*.js' npm test with broad patterns first, then narrow down.
2. nyc Reports Different Numbers Than c8
Error: nyc reports 92% line coverage, c8 reports 87% for the same test suite.
Cause: nyc instruments source code and counts synthetic statements. c8 uses V8's native counters which are more granular. c8 counts branches inside ternaries, short-circuit operators, and default parameter values that nyc may miss or count differently.
Fix: This is expected behavior. Pick one tool and standardize on it. Do not mix metrics from different tools in the same report.
3. Coverage Drops After Upgrading Node.js
Error:
ERROR: Coverage for branches (68%) does not meet global threshold (75%)
Cause: Newer V8 versions detect more branch points. Node.js 20 counts branches in optional chaining (?.) and nullish coalescing (??) that Node.js 16 did not. Your code has not changed, but V8 now sees more untested paths.
Fix: Audit the new uncovered branches. They are usually real gaps. Adjust thresholds temporarily if needed, but write the missing tests within the sprint.
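As a hedged illustration of the kind of code involved, each ?. and ?? below is a branch pair, and a suite that only ever passes a fully populated object leaves one side of each pair uncovered:
// Three branch pairs: user nullish, profile nullish, email nullish
function displayEmail(user) {
  return user?.profile?.email ?? "unknown";
}

displayEmail({ profile: { email: "[email protected]" } }); // the only path most suites exercise
displayEmail(null);                                         // user is nullish
displayEmail({});                                           // profile is nullish
displayEmail({ profile: {} });                              // email is nullish, ?? fallback runs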
4. Istanbul Ignore Comments Not Working with c8
Error: Lines marked with /* istanbul ignore next */ still appear as uncovered in c8 reports.
Cause: c8 v7 and earlier only respected /* c8 ignore */ comments, not Istanbul-style comments. c8 v8+ supports both, but you may be running an older version.
Fix: Either upgrade c8 (npm install --save-dev c8@latest) or use c8's native comment syntax:
/* c8 ignore next */
var fallback = process.env.FALLBACK || "default";
/* c8 ignore start */
if (process.env.NODE_ENV === "development") {
enableDevTools();
}
/* c8 ignore stop */
5. Coverage Data Missing for Files Required Inside Tests
Error: Files loaded dynamically with require() inside test callbacks do not appear in coverage.
Cause: With nyc, if require() happens before nyc's instrumentation hook is registered, the file is loaded without counters. With c8, this is usually not an issue since V8 coverage is process-wide.
Fix: For nyc, make sure you run tests via npx nyc mocha and not mocha followed by nyc report. The instrumentation hook must be active during require().
6. Merge Conflicts in coverage-summary.json
Error: Git merge conflicts in coverage/coverage-summary.json that was accidentally committed.
Fix: Add coverage output to .gitignore immediately:
# Coverage
coverage/
.nyc_output/
.c8_output/
*.lcov
Then remove the tracked files:
git rm -r --cached coverage/
git rm -r --cached .nyc_output/
git commit -m "Remove coverage artifacts from tracking"
Best Practices
Track branch coverage as your primary metric. Line and statement coverage create a false sense of security. Branch coverage exposes untested conditional logic, error paths, and edge cases where bugs actually live.
Set thresholds based on current reality, then ratchet upward. Starting with aggressive thresholds on an untested codebase incentivizes bad tests. Measure where you are, set the threshold 5% below that, and increase quarterly.
Use all: true in your coverage configuration. Without it, files that are never imported during tests are invisible. You want those zeros staring at you in the report.
Run coverage in CI with --check-coverage and fail the build. Thresholds without enforcement are suggestions. Wire them into your merge gate so coverage cannot silently decay.
Use c8 for new projects, nyc for established ones. c8 is faster, more accurate, and has a simpler mental model. But migrating a project with extensive .nycrc configuration and Istanbul ignore comments is not worth the churn unless you are hitting real nyc bugs.
Never chase 100% coverage. Diminishing returns hit hard above 85%. The last 15% is usually defensive programming, platform-specific branches, and error handlers for conditions you cannot easily simulate. Covering them leads to brittle, over-mocked tests.
Review coverage reports during code review. Do not just check the percentage. Open the HTML report, look at the file diff, and ask: "Are the uncovered lines acceptable?" Sometimes they are. Sometimes they reveal that the entire error handling path is untested.
Exclude generated code, migrations, and configuration files. Coverage for database migrations and webpack configs is noise. Be explicit about what is in scope and what is not.
Combine unit and integration test coverage. Neither alone gives the full picture. Unit tests cover internal logic. Integration tests cover the wiring between components. Merge the coverage data for an accurate view.
Treat coverage drops in PRs as code review signals. If a PR adds 200 lines and coverage drops by 3%, the new code is undertested. That is worth a conversation, even if global thresholds still pass.
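A rough sketch of that check, assuming the base branch and the PR each produce a coverage/coverage-summary.json via the json-summary reporter (file locations and the 1% cutoff are assumptions):
// scripts/coverage-delta.js
// Usage: node scripts/coverage-delta.js base-summary.json pr-summary.json
var fs = require("fs");

var base = JSON.parse(fs.readFileSync(process.argv[2], "utf8")).total;
var pr = JSON.parse(fs.readFileSync(process.argv[3], "utf8")).total;

["lines", "branches", "functions", "statements"].forEach(function(metric) {
  var delta = pr[metric].pct - base[metric].pct;
  var sign = delta >= 0 ? "+" : "";
  console.log(metric + ": " + base[metric].pct + "% -> " + pr[metric].pct + "% (" + sign + delta.toFixed(2) + "%)");
});

// Surface a meaningful drop for the reviewer instead of hard-failing the build
if (pr.branches.pct < base.branches.pct - 1) {
  console.warn("Branch coverage dropped by more than 1% in this PR; worth a conversation in review.");
}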
