Integrating Security Scanning into Azure Pipelines
A practical guide to integrating security scanning into Azure DevOps pipelines, covering dependency scanning, SAST, secret detection, container image scanning, and configuring vulnerability gates.
Overview
Security scanning in your CI/CD pipeline is not optional — it is the difference between catching a critical vulnerability before it ships and finding out about it from a security researcher's public disclosure. Integrating automated scans directly into Azure Pipelines means every pull request and every build gets checked for dependency vulnerabilities, leaked secrets, insecure code patterns, and container image weaknesses. This article walks through exactly how to set up each type of scan, wire them together as pipeline gates, and manage the inevitable false positives that come with automated tooling.
Prerequisites
- An Azure DevOps organization with at least one project
- A Node.js project with a package.json and package-lock.json
- A Dockerfile (for the container image scanning sections)
- Basic familiarity with YAML pipeline syntax
- An Azure DevOps agent with Docker installed (Microsoft-hosted ubuntu-latest works)
- Admin or Build Admin permissions on the pipeline
Why Shift-Left Security Matters
The term "shift-left" gets thrown around a lot, but the concept is straightforward: find problems earlier in the development lifecycle where they are cheaper to fix. A vulnerability discovered during a code review costs almost nothing. That same vulnerability discovered in a production incident costs orders of magnitude more — in engineering time, customer trust, and potentially regulatory fines.
Traditional security scanning happened quarterly, run by a separate security team against production code. By the time findings reached developers, the code was months old. Nobody remembered why that dependency was added. The fix required context that had evaporated.
Pipeline-integrated scanning flips this. The developer who wrote the code sees the finding immediately, in the same pull request. The context is fresh. The fix is a one-line change to package.json, not a cross-team remediation project.
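When the vulnerable package is a transitive dependency, npm 8.3+ lets that one-line fix live in the overrides field of package.json. A minimal sketch (lodash here is just an illustrative package):

```json
{
  "overrides": {
    "lodash": "^4.17.21"
  }
}
```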
I have seen teams go from "we do a security review before major releases" to "every PR gets automated scanning" and the number of vulnerabilities that reach production drops by 80-90%. That is not a theoretical improvement — that is real-world data from teams I have worked with.
Types of Security Scanning
Before diving into configuration, let me lay out the categories of scanning and what each one catches.
Software Composition Analysis (SCA / Dependency Scanning)
SCA tools check your third-party dependencies against known vulnerability databases. Your application code might be perfect, but if you depend on a library with a critical CVE, you are vulnerable.
Tools covered: npm audit, Snyk, OWASP Dependency-Check
What it catches: Known CVEs in direct and transitive dependencies, outdated packages with security patches available.
Static Application Security Testing (SAST)
SAST analyzes your source code without executing it, looking for patterns that indicate security weaknesses — SQL injection, cross-site scripting, insecure cryptography, and similar issues.
Tools covered: ESLint security plugins, SonarQube
What it catches: Code-level vulnerabilities like injection flaws, insecure random number generation, hardcoded credentials, prototype pollution.
Secret Detection
Scans your codebase and commit history for leaked credentials — API keys, passwords, connection strings, tokens. This is one of the highest-value scans you can run because a leaked secret in a public repository can be exploited within minutes.
Tools covered: detect-secrets, gitleaks, custom regex patterns
What it catches: AWS keys, Azure connection strings, database passwords, API tokens, private keys committed to source control.
Container Image Scanning
If you build Docker images, scanning the resulting image catches vulnerabilities in the base image OS packages, embedded libraries, and misconfigurations in the image itself.
Tools covered: Trivy, Azure Defender for Containers
What it catches: CVEs in OS packages (Alpine, Debian, Ubuntu), vulnerabilities in language-specific packages installed in the image, misconfigurations like running as root.
License Compliance
Not strictly a security scan, but often run alongside security tools. License compliance scanning checks that your dependency tree does not include packages with licenses that are incompatible with your project's licensing requirements.
What it catches: GPL dependencies in proprietary code, AGPL libraries in SaaS applications, packages with no declared license.
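This article does not configure a license scanner in detail, but as one lightweight option in a Node.js project, the license-checker npm package can fail a pipeline step on disallowed licenses. A sketch, where the disallowed-license list is an example policy rather than a recommendation:

```yaml
- script: npx license-checker --production --summary --failOn 'GPL;AGPL'
  displayName: 'License compliance check'
```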
Dependency Vulnerability Scanning
npm audit
The simplest starting point is npm audit, which is built into npm and requires zero additional tooling.
# azure-pipelines.yml
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '20.x'
  displayName: 'Install Node.js'

- script: npm ci
  displayName: 'Install dependencies'

- script: |
    npm audit --audit-level=critical --json > $(Build.ArtifactStagingDirectory)/npm-audit.json
    npm audit --audit-level=critical
  displayName: 'Run npm audit'
  continueOnError: true

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)/npm-audit.json'
    artifactName: 'security-reports'
  displayName: 'Publish audit results'
The --audit-level=critical flag means the command only exits with a non-zero code if there are critical vulnerabilities. You can set this to high, moderate, or low depending on your tolerance.
Realistic output from npm audit:
# npm audit report
lodash <4.17.21
Severity: critical
Prototype Pollution in lodash - https://github.com/advisories/GHSA-jf85-cpcp-j695
Command Injection in lodash - https://github.com/advisories/GHSA-35jh-r3h4-6jhm
fix available via `npm audit fix`
node_modules/lodash
jsonwebtoken <=8.5.1
Severity: high
Improper Restriction of Security Token Assignment - https://github.com/advisories/GHSA-hjrf-2m68-5959
fix available via `npm audit fix --force`
node_modules/jsonwebtoken
3 vulnerabilities (1 high, 2 critical)
Limitation of npm audit: It only covers npm packages. If your project has Python dependencies, Go modules, or system-level packages, you need additional tools.
Snyk Integration
Snyk provides deeper analysis than npm audit, including reachability analysis (is the vulnerable function actually called in your code?) and fix PRs.
steps:
- script: |
    npm install -g snyk
    snyk auth $(SNYK_TOKEN)
  displayName: 'Install and authenticate Snyk'

- script: |
    snyk test --severity-threshold=high --json-file-output=$(Build.ArtifactStagingDirectory)/snyk-report.json
  displayName: 'Run Snyk dependency scan'
  continueOnError: true

- script: |
    snyk monitor --project-name="$(Build.Repository.Name)"
  displayName: 'Upload snapshot to Snyk dashboard'
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
The SNYK_TOKEN variable should be stored as a secret variable in your pipeline or pulled from Azure Key Vault.
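One way to do that is a variable group backed by Azure Key Vault, so the token never lives in pipeline settings at all (the group name below is illustrative):

```yaml
variables:
- group: security-scan-secrets  # variable group linked to Azure Key Vault; exposes SNYK_TOKEN
```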
OWASP Dependency-Check
OWASP Dependency-Check is a free, open-source tool that checks dependencies against the National Vulnerability Database (NVD). It supports multiple ecosystems and is particularly thorough.
steps:
- task: dependency-check-build-task@6
  inputs:
    projectName: '$(Build.Repository.Name)'
    scanPath: '$(Build.SourcesDirectory)'
    format: 'HTML,JSON'
    failOnCVSS: '7'
    additionalArguments: >-
      --enableExperimental
      --suppression $(Build.SourcesDirectory)/owasp-suppressions.xml
  displayName: 'OWASP Dependency-Check'

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.SourcesDirectory)/dependency-check-report.html'
    artifactName: 'dependency-check-report'
  displayName: 'Publish OWASP report'
The failOnCVSS parameter is your threshold. CVSS scores range from 0 to 10 — setting it to 7 means any vulnerability with a CVSS score of 7.0 or higher (high or critical) will fail the build.
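As a quick reference, the standard CVSS v3 severity bands can be expressed as a small lookup, which is effectively what a failOnCVSS-style threshold evaluates (a sketch; cvssBucket is an illustrative helper, not part of any tool):

```javascript
// Map a CVSS v3 base score (0-10) to its standard severity band.
function cvssBucket(score) {
  if (score >= 9.0) return 'critical';
  if (score >= 7.0) return 'high';
  if (score >= 4.0) return 'medium';
  if (score > 0.0) return 'low';
  return 'none';
}

// With failOnCVSS set to 7, anything in the high or critical band fails the build.
console.log(cvssBucket(9.8)); // critical
console.log(cvssBucket(7.0)); // high
console.log(cvssBucket(5.3)); // medium
```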
Static Analysis with ESLint Security Plugins
ESLint is already in most Node.js projects. Adding security-focused plugins gives you SAST capabilities with minimal setup.
Setting Up ESLint Security Rules
Install the security plugin:
npm install --save-dev eslint-plugin-security
Add it to your .eslintrc.json:
{
  "plugins": ["security"],
  "extends": ["plugin:security/recommended"],
  "rules": {
    "security/detect-eval-with-expression": "error",
    "security/detect-non-literal-fs-filename": "warn",
    "security/detect-non-literal-require": "warn",
    "security/detect-possible-timing-attacks": "warn",
    "security/detect-object-injection": "warn",
    "security/detect-child-process": "warn",
    "security/detect-no-csrf-before-method-override": "error",
    "security/detect-buffer-noassert": "error",
    "security/detect-unsafe-regex": "error"
  }
}
Pipeline Integration
steps:
- script: |
    npx eslint . --format json --output-file $(Build.ArtifactStagingDirectory)/eslint-security.json || true
    npx eslint . --format stylish
  displayName: 'Run ESLint security analysis'
  continueOnError: true
SonarQube Integration
For teams that want deeper SAST analysis, SonarQube provides comprehensive code analysis including security hotspots. You need the SonarQube extension installed in your Azure DevOps organization and a SonarQube server.
steps:
- task: SonarQubePrepare@5
  inputs:
    SonarQube: 'sonarqube-connection'
    scannerMode: 'CLI'
    configMode: 'manual'
    cliProjectKey: '$(Build.Repository.Name)'
    cliSources: 'src'
    extraProperties: |
      sonar.javascript.lcov.reportPaths=coverage/lcov.info
      sonar.security.hotspots.reviewed=true

- script: npm test -- --coverage
  displayName: 'Run tests with coverage'

- task: SonarQubeAnalyze@5
  displayName: 'Run SonarQube analysis'

- task: SonarQubePublish@5
  inputs:
    pollingTimeoutSec: '300'
  displayName: 'Publish SonarQube results'
Secret Detection
Leaked secrets are one of the most damaging security failures because they require zero sophistication to exploit. Someone finds your AWS key in a commit, and they are mining cryptocurrency on your account within the hour.
Using detect-secrets
detect-secrets is a Python-based tool from Yelp that scans for high-entropy strings and known secret patterns.
steps:
- script: |
    pip install detect-secrets
    detect-secrets scan --all-files --force-use-all-plugins > $(Build.ArtifactStagingDirectory)/secrets-baseline.json
    detect-secrets audit --report --json $(Build.ArtifactStagingDirectory)/secrets-baseline.json
  displayName: 'Run secret detection'
Using gitleaks
Gitleaks is faster and purpose-built for CI/CD. It scans both the current state of files and the git history.
steps:
- script: |
    curl -sSfL https://github.com/gitleaks/gitleaks/releases/download/v8.18.0/gitleaks_8.18.0_linux_x64.tar.gz | tar xz
    chmod +x gitleaks
    ./gitleaks detect --source=$(Build.SourcesDirectory) --report-format=json --report-path=$(Build.ArtifactStagingDirectory)/gitleaks-report.json --verbose
  displayName: 'Run gitleaks secret detection'
Realistic gitleaks output when it finds something:
Finding: AKIAIOSFODNN7EXAMPLE
Secret: AKIAIOSFODNN7EXAMPLE
RuleID: aws-access-key-id
Entropy: 3.684
File: config/aws-config.js
Line: 12
Commit: a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2
Author: dev@example.com
Date: 2025-08-15T14:32:01Z
Fingerprint: a1b2c3d4e5f6:config/aws-config.js:aws-access-key-id:12
Finding: mongodb+srv://admin:P@ssw0rd123@cluster0.example.net
Secret: P@ssw0rd123
RuleID: generic-credential
Entropy: 3.214
File: src/db.js
Line: 5
Commit: b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3
Author: dev@example.com
Date: 2025-09-02T09:15:44Z
2 leaks found in 847 commits
Suppression for False Positives
Create a .gitleaksignore file in your repository root to suppress known false positives:
# .gitleaksignore
# Test fixtures with fake credentials
a1b2c3d4e5f6:test/fixtures/mock-config.js:generic-credential:8
# Example code in documentation
c3d4e5f6a1b2:docs/examples/sample-connection.md:generic-credential:15
Custom Secret Patterns
You can extend gitleaks to detect company-specific patterns. Create a .gitleaks.toml:
# .gitleaks.toml
title = "Custom gitleaks config"
[extend]
useDefault = true
[[rules]]
id = "azure-devops-pat"
description = "Azure DevOps Personal Access Token"
regex = '''[a-z2-7]{52}'''
keywords = ["azure","devops","pat"]
[[rules]]
id = "internal-api-key"
description = "Internal API key format"
regex = '''mycompany_[a-zA-Z0-9]{32}'''
keywords = ["mycompany"]
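Before committing a custom rule, it is worth sanity-checking the regex against sample values, since a pattern that never matches fails silently. A quick check for the two rules above (the sample tokens are fabricated):

```javascript
// Regexes copied from the custom gitleaks rules above.
var patRule = /[a-z2-7]{52}/;                 // Azure DevOps PAT shape
var apiKeyRule = /mycompany_[a-zA-Z0-9]{32}/; // internal API key shape

// Fabricated samples with the right shape.
var fakePat = 'abcdefghijklmnopqrstuvwxyz234567'.repeat(2).slice(0, 52);
var fakeKey = 'mycompany_' + 'Ab3'.repeat(10) + 'Zz'; // 32 alphanumerics

console.log(patRule.test(fakePat));    // true
console.log(apiKeyRule.test(fakeKey)); // true
```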
Container Image Scanning with Trivy
Trivy is the go-to open-source container scanner. It is fast, accurate, and has excellent CVE coverage. Unlike some alternatives, Trivy requires no server-side component — it runs entirely in the pipeline.
Basic Trivy Integration
steps:
- task: Docker@2
  inputs:
    command: 'build'
    dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
    tags: '$(Build.BuildId)'
    repository: 'myapp'
  displayName: 'Build Docker image'

- script: |
    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.50.0
    trivy image --format json --output $(Build.ArtifactStagingDirectory)/trivy-report.json myapp:$(Build.BuildId)
    trivy image --severity CRITICAL,HIGH --exit-code 1 myapp:$(Build.BuildId)
  displayName: 'Scan container image with Trivy'
The key flags:
- --severity CRITICAL,HIGH filters the output to high and critical findings
- --exit-code 1 makes Trivy return a non-zero exit code when vulnerabilities matching the severity filter are found, which fails the pipeline step
Realistic Trivy output:
myapp:42 (debian 12.4)
Total: 14 (HIGH: 11, CRITICAL: 3)
┌──────────────────┬────────────────┬──────────┬────────────────┬───────────────┬──────────────────────────────────────┐
│ Library │ Vulnerability │ Severity │ Installed Ver │ Fixed Ver │ Title │
├──────────────────┼────────────────┼──────────┼────────────────┼───────────────┼──────────────────────────────────────┤
│ libssl3 │ CVE-2024-5535 │ CRITICAL │ 3.0.11-1 │ 3.0.13-1 │ openssl: SSL_select_next_proto... │
│ libexpat1 │ CVE-2024-45490 │ CRITICAL │ 2.5.0-1 │ 2.5.0-1+deb12 │ libexpat: Negative Length Parsing... │
│ curl │ CVE-2024-2398 │ HIGH │ 7.88.1-10 │ 7.88.1-10+d12 │ curl: HTTP/2 push headers memory... │
│ zlib1g │ CVE-2023-45853 │ HIGH │ 1:1.2.13 │ │ MiniZip: integer overflow in zip... │
└──────────────────┴────────────────┴──────────┴────────────────┴───────────────┴──────────────────────────────────────┘
Node.js (package-lock.json)
Total: 3 (HIGH: 2, CRITICAL: 1)
┌──────────────────┬────────────────┬──────────┬────────────────┬───────────────┬──────────────────────────────────────┐
│ Library │ Vulnerability │ Severity │ Installed Ver │ Fixed Ver │ Title │
├──────────────────┼────────────────┼──────────┼────────────────┼───────────────┼──────────────────────────────────────┤
│ express │ CVE-2024-29041 │ HIGH │ 4.18.2 │ 4.19.2 │ express: Open Redirect... │
│ semver │ CVE-2022-25883 │ HIGH │ 7.3.7 │ 7.5.2 │ semver: Regular Expression Denial.. │
└──────────────────┴────────────────┴──────────┴────────────────┴───────────────┴──────────────────────────────────────┘
Trivy Suppression with .trivyignore
Not every CVE is exploitable in your context. Create a .trivyignore file to suppress specific findings:
# .trivyignore
# zlib vulnerability - not exploitable in our use case, no fix available
CVE-2023-45853
# Disputed CVE, upstream has rejected the report
CVE-2024-12345
Integrating Scans as Pipeline Gates
Running scans is only half the battle. You need to configure them as gates that actually block the build when critical findings are discovered.
Vulnerability Threshold Strategy
Here is the strategy I recommend and have used across multiple teams:
| Severity | Action | Rationale |
|---|---|---|
| Critical | Fail the build | Non-negotiable. Critical CVEs are actively exploited. |
| High | Warn and create work item | Should be fixed within the sprint. |
| Medium | Report only | Track in dashboards, address in scheduled maintenance. |
| Low | Ignore in pipeline | Handle during quarterly reviews. |
Implementing the Gate
steps:
- script: |
    CRITICAL_COUNT=$(npm audit --json 2>/dev/null | node -e "
      var data = '';
      process.stdin.on('data', function(chunk) { data += chunk; });
      process.stdin.on('end', function() {
        var report = JSON.parse(data);
        var meta = report.metadata || {};
        var vulns = meta.vulnerabilities || {};
        console.log(vulns.critical || 0);
      });
    ")
    HIGH_COUNT=$(npm audit --json 2>/dev/null | node -e "
      var data = '';
      process.stdin.on('data', function(chunk) { data += chunk; });
      process.stdin.on('end', function() {
        var report = JSON.parse(data);
        var meta = report.metadata || {};
        var vulns = meta.vulnerabilities || {};
        console.log(vulns.high || 0);
      });
    ")
    echo "Critical: $CRITICAL_COUNT, High: $HIGH_COUNT"
    echo "##vso[task.setvariable variable=criticalVulns]$CRITICAL_COUNT"
    echo "##vso[task.setvariable variable=highVulns]$HIGH_COUNT"
    if [ "$CRITICAL_COUNT" -gt 0 ]; then
      echo "##vso[task.logissue type=error]Found $CRITICAL_COUNT critical vulnerabilities. Build will fail."
      exit 1
    fi
    if [ "$HIGH_COUNT" -gt 0 ]; then
      echo "##vso[task.logissue type=warning]Found $HIGH_COUNT high vulnerabilities. Please address soon."
    fi
  displayName: 'Evaluate vulnerability thresholds'
Publishing Results to Pipeline Dashboard
Azure Pipelines supports publishing test results in JUnit format, which gives you a native test results tab in the build summary.
Write a small script that converts scan output to JUnit XML:
// scripts/audit-to-junit.js
var fs = require('fs');

var auditJson = fs.readFileSync(process.argv[2], 'utf8');
var audit = JSON.parse(auditJson);

// Escape characters that would break the XML; advisory titles and version
// ranges routinely contain <, >, and & (e.g. "lodash <4.17.21").
function xmlEscape(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

var testCases = [];
var vulnerabilities = audit.vulnerabilities || {};

Object.keys(vulnerabilities).forEach(function(name) {
  var vuln = vulnerabilities[name];
  var severity = vuln.severity;
  var title = vuln.via && vuln.via[0] ? (vuln.via[0].title || name) : name;
  var url = vuln.via && vuln.via[0] ? (vuln.via[0].url || '') : '';
  var testCase = '  <testcase name="' + xmlEscape(name + ' (' + severity + ')') + '" classname="npm-audit">';
  if (severity === 'critical' || severity === 'high') {
    testCase += '\n    <failure message="' + xmlEscape(severity.toUpperCase() + ': ' + title) + '">';
    testCase += '\nPackage: ' + xmlEscape(name);
    testCase += '\nSeverity: ' + severity;
    testCase += '\nRange: ' + xmlEscape(vuln.range || 'unknown');
    testCase += '\nAdvisory: ' + xmlEscape(url);
    testCase += '\n    </failure>';
  }
  testCase += '\n  </testcase>';
  testCases.push(testCase);
});

var total = testCases.length;
var failures = Object.keys(vulnerabilities).filter(function(name) {
  var s = vulnerabilities[name].severity;
  return s === 'critical' || s === 'high';
}).length;

var xml = '<?xml version="1.0" encoding="UTF-8"?>\n';
xml += '<testsuite name="npm-audit" tests="' + total + '" failures="' + failures + '">\n';
xml += testCases.join('\n');
xml += '\n</testsuite>';

fs.writeFileSync(process.argv[3], xml);
console.log('Generated JUnit report: ' + total + ' checks, ' + failures + ' failures');
Use it in the pipeline:
steps:
- script: |
    npm audit --json > $(Build.ArtifactStagingDirectory)/npm-audit.json || true
    node scripts/audit-to-junit.js $(Build.ArtifactStagingDirectory)/npm-audit.json $(Build.ArtifactStagingDirectory)/npm-audit-junit.xml
  displayName: 'Generate audit JUnit report'

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(Build.ArtifactStagingDirectory)/npm-audit-junit.xml'
    testRunTitle: 'Security - npm audit'
  displayName: 'Publish audit results'
This makes vulnerability findings show up in the Tests tab of the build summary, alongside your regular unit tests.
Managing False Positives and Suppression Files
Every automated scanning tool produces false positives. If you do not manage them, developers will start ignoring the scans entirely. That is worse than not scanning at all because you have a false sense of security.
Suppression Strategy
- Create a dedicated suppression file for each tool (.trivyignore, .gitleaksignore, owasp-suppressions.xml)
- Require a comment explaining why the finding is suppressed
- Review suppressions in code review — suppression file changes should get the same scrutiny as code changes
- Set expiration dates — revisit suppressions quarterly
OWASP Dependency-Check suppression file example:
<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.3.xsd">
<!-- False positive: this CVE applies to the Python package, not the Node.js one -->
<suppress until="2026-06-01Z">
<notes>CVE applies to python-requests, not node request. Reviewed by Shane 2026-02-01.</notes>
<cve>CVE-2024-12345</cve>
</suppress>
<!-- Test-only dependency, not shipped to production -->
<suppress>
<notes>mocha is a devDependency only, not included in production builds.</notes>
<packageUrl regex="true">^pkg:npm/mocha@.*$</packageUrl>
<cve>CVE-2024-67890</cve>
</suppress>
</suppressions>
Scheduling Full Scans vs. Incremental Scans on PRs
Running every scan on every PR is expensive and slow. Here is how to balance thoroughness with developer experience.
PR Builds: Fast, Targeted Scans
On pull requests, run lightweight scans that complete in under 2 minutes:
# pr-pipeline.yml
trigger: none
pr:
  branches:
    include:
    - main
    - develop

steps:
- script: npm ci
  displayName: 'Install dependencies'

- script: npm audit --audit-level=high
  displayName: 'Quick dependency check'
  continueOnError: true

- script: npx eslint --ext .js src/ --quiet
  displayName: 'ESLint security analysis (errors only)'

- script: |
    ./gitleaks detect --source=$(Build.SourcesDirectory) --log-opts="$(System.PullRequest.TargetBranch)..HEAD" --verbose
  displayName: 'Secret detection (PR commits only)'
The key optimization is --log-opts="$(System.PullRequest.TargetBranch)..HEAD" for gitleaks — this only scans commits in the PR, not the entire git history.
Scheduled Builds: Full Comprehensive Scans
Run thorough scans nightly or weekly on the main branch:
# scheduled-security-scan.yml
trigger: none
schedules:
- cron: '0 2 * * 1-5'
  displayName: 'Nightly security scan (Mon-Fri 2AM)'
  branches:
    include:
    - main
  always: true

steps:
- script: npm ci
  displayName: 'Install dependencies'

- script: npm audit --json > $(Build.ArtifactStagingDirectory)/npm-audit.json
  displayName: 'Full npm audit'
  continueOnError: true

- script: snyk test --all-projects --json-file-output=$(Build.ArtifactStagingDirectory)/snyk-full.json
  displayName: 'Full Snyk analysis'
  continueOnError: true

- task: dependency-check-build-task@6
  inputs:
    projectName: '$(Build.Repository.Name)'
    scanPath: '$(Build.SourcesDirectory)'
    format: 'HTML,JSON'
    failOnCVSS: '7'
  displayName: 'OWASP Dependency-Check (full NVD scan)'

- script: |
    ./gitleaks detect --source=$(Build.SourcesDirectory) --verbose
  displayName: 'Full history secret detection'

- script: |
    trivy image --severity CRITICAL,HIGH,MEDIUM myapp:latest
  displayName: 'Full container image scan'
Consolidating Scan Results Across Tools
When you are running four or five different scanning tools, results get scattered. Here is how to consolidate them.
Build Artifacts Approach
Publish all reports as build artifacts with a consistent naming convention:
steps:
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'security-reports'
  displayName: 'Publish all security reports'
  condition: always()
Summary Script
Write a script that reads all scan outputs and produces a single summary:
// scripts/security-summary.js
var fs = require('fs');
var path = require('path');

var reportDir = process.argv[2] || './security-reports';
var summary = {
  timestamp: new Date().toISOString(),
  tools: {},
  totals: { critical: 0, high: 0, medium: 0, low: 0 }
};

// Parse npm audit
var npmAuditPath = path.join(reportDir, 'npm-audit.json');
if (fs.existsSync(npmAuditPath)) {
  var npmAudit = JSON.parse(fs.readFileSync(npmAuditPath, 'utf8'));
  var vulns = (npmAudit.metadata && npmAudit.metadata.vulnerabilities) || {};
  summary.tools['npm-audit'] = {
    critical: vulns.critical || 0,
    high: vulns.high || 0,
    medium: vulns.moderate || 0,
    low: vulns.low || 0
  };
  summary.totals.critical += vulns.critical || 0;
  summary.totals.high += vulns.high || 0;
  summary.totals.medium += vulns.moderate || 0;
  summary.totals.low += vulns.low || 0;
}

// Parse Trivy
var trivyPath = path.join(reportDir, 'trivy-report.json');
if (fs.existsSync(trivyPath)) {
  var trivy = JSON.parse(fs.readFileSync(trivyPath, 'utf8'));
  var trivyCounts = { critical: 0, high: 0, medium: 0, low: 0 };
  var results = trivy.Results || [];
  results.forEach(function(result) {
    var vulnList = result.Vulnerabilities || [];
    vulnList.forEach(function(v) {
      var sev = (v.Severity || '').toLowerCase();
      if (trivyCounts[sev] !== undefined) {
        trivyCounts[sev]++;
      }
    });
  });
  summary.tools['trivy'] = trivyCounts;
  summary.totals.critical += trivyCounts.critical;
  summary.totals.high += trivyCounts.high;
  summary.totals.medium += trivyCounts.medium;
  summary.totals.low += trivyCounts.low;
}

// Parse gitleaks
var gitleaksPath = path.join(reportDir, 'gitleaks-report.json');
if (fs.existsSync(gitleaksPath)) {
  var gitleaks = JSON.parse(fs.readFileSync(gitleaksPath, 'utf8'));
  var leakCount = Array.isArray(gitleaks) ? gitleaks.length : 0;
  summary.tools['gitleaks'] = { secrets_found: leakCount };
  if (leakCount > 0) {
    summary.totals.critical += leakCount;
  }
}

console.log('\n========== SECURITY SCAN SUMMARY ==========');
console.log('Timestamp:', summary.timestamp);
console.log('');
Object.keys(summary.tools).forEach(function(tool) {
  console.log('[' + tool + ']');
  var data = summary.tools[tool];
  Object.keys(data).forEach(function(key) {
    console.log('  ' + key + ': ' + data[key]);
  });
  console.log('');
});
console.log('TOTALS:');
console.log('  Critical: ' + summary.totals.critical);
console.log('  High:     ' + summary.totals.high);
console.log('  Medium:   ' + summary.totals.medium);
console.log('  Low:      ' + summary.totals.low);
console.log('============================================\n');

var outputPath = path.join(reportDir, 'security-summary.json');
fs.writeFileSync(outputPath, JSON.stringify(summary, null, 2));

if (summary.totals.critical > 0) {
  console.log('RESULT: FAIL - ' + summary.totals.critical + ' critical findings detected.');
  process.exit(1);
}
console.log('RESULT: PASS - No critical findings.');
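Wiring the script into the pipeline is then a single step after all scans have written their reports (the path follows the conventions used above; because the script exits non-zero on critical findings, it doubles as a gate):

```yaml
- script: node scripts/security-summary.js $(Build.ArtifactStagingDirectory)/reports
  displayName: 'Consolidate security scan results'
```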
Complete Working Example
Here is a full YAML pipeline that ties everything together — npm audit, ESLint security rules, secret detection, and Trivy container scanning. Results are published to the pipeline and critical findings block the build.
# azure-pipelines-security.yml
trigger:
  branches:
    include:
    - main
    - develop
  paths:
    exclude:
    - '*.md'
    - docs/

pr:
  branches:
    include:
    - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  trivyVersion: '0.50.0'
  gitleaksVersion: '8.18.0'
  imageName: 'myapp'
  imageTag: '$(Build.BuildId)'
  isPR: ${{ eq(variables['Build.Reason'], 'PullRequest') }}

stages:
- stage: SecurityScans
  displayName: 'Security Scanning'
  jobs:
  - job: DependencyAndCodeScans
    displayName: 'Dependency & Code Scans'
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '20.x'
      displayName: 'Install Node.js'

    - script: npm ci
      displayName: 'Install dependencies'

    # ---- npm audit ----
    - script: |
        mkdir -p $(Build.ArtifactStagingDirectory)/reports
        npm audit --json > $(Build.ArtifactStagingDirectory)/reports/npm-audit.json 2>&1 || true
        echo "## npm audit results:"
        npm audit --audit-level=moderate 2>&1 || true
        node scripts/audit-to-junit.js $(Build.ArtifactStagingDirectory)/reports/npm-audit.json $(Build.ArtifactStagingDirectory)/reports/npm-audit-junit.xml
      displayName: 'npm audit'
      continueOnError: true

    - task: PublishTestResults@2
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: '$(Build.ArtifactStagingDirectory)/reports/npm-audit-junit.xml'
        testRunTitle: 'Security - Dependency Audit'
      displayName: 'Publish dependency audit results'
      condition: always()

    # ---- ESLint security ----
    - script: |
        npx eslint . --ext .js --format json --output-file $(Build.ArtifactStagingDirectory)/reports/eslint-security.json || true
        npx eslint . --ext .js --format stylish || true
      displayName: 'ESLint security analysis'
      continueOnError: true

    # ---- Secret detection ----
    - script: |
        curl -sSfL https://github.com/gitleaks/gitleaks/releases/download/v$(gitleaksVersion)/gitleaks_$(gitleaksVersion)_linux_x64.tar.gz | tar xz
        chmod +x gitleaks
        if [ "$(isPR)" = "True" ]; then
          echo "PR build - scanning PR commits only"
          ./gitleaks detect --source=$(Build.SourcesDirectory) --log-opts="origin/main..HEAD" --report-format=json --report-path=$(Build.ArtifactStagingDirectory)/reports/gitleaks-report.json --verbose
        else
          echo "CI build - scanning full repository"
          ./gitleaks detect --source=$(Build.SourcesDirectory) --report-format=json --report-path=$(Build.ArtifactStagingDirectory)/reports/gitleaks-report.json --verbose
        fi
      displayName: 'Secret detection (gitleaks)'
      continueOnError: true

    # ---- Evaluate thresholds ----
    - script: |
        CRITICAL=$(npm audit --json 2>/dev/null | node -e "
          var d='';
          process.stdin.on('data',function(c){d+=c;});
          process.stdin.on('end',function(){
            var r=JSON.parse(d);
            var m=r.metadata||{};
            var v=m.vulnerabilities||{};
            console.log(v.critical||0);
          });
        ")
        SECRETS=0
        if [ -f "$(Build.ArtifactStagingDirectory)/reports/gitleaks-report.json" ]; then
          SECRETS=$(node -e "
            var fs=require('fs');
            var d=JSON.parse(fs.readFileSync('$(Build.ArtifactStagingDirectory)/reports/gitleaks-report.json','utf8'));
            console.log(Array.isArray(d)?d.length:0);
          ")
        fi
        echo "Critical vulnerabilities: $CRITICAL"
        echo "Leaked secrets: $SECRETS"
        GATE_FAILED=0
        if [ "$CRITICAL" -gt 0 ]; then
          echo "##vso[task.logissue type=error]GATE FAILED: $CRITICAL critical vulnerability(ies) found in dependencies."
          GATE_FAILED=1
        fi
        if [ "$SECRETS" -gt 0 ]; then
          echo "##vso[task.logissue type=error]GATE FAILED: $SECRETS secret(s) detected in source code."
          GATE_FAILED=1
        fi
        if [ "$GATE_FAILED" -eq 1 ]; then
          exit 1
        fi
        echo "All security gates passed."
      displayName: 'Evaluate security gates'

    # ---- Publish all reports ----
    - task: PublishBuildArtifacts@1
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)/reports'
        artifactName: 'security-reports'
      displayName: 'Publish security reports'
      condition: always()

  - job: ContainerScan
    displayName: 'Container Image Scan'
    dependsOn: []
    steps:
    - task: Docker@2
      inputs:
        command: 'build'
        dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
        tags: '$(imageTag)'
        repository: '$(imageName)'
      displayName: 'Build Docker image'

    - script: |
        curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v$(trivyVersion)
        mkdir -p $(Build.ArtifactStagingDirectory)/reports
        echo "=== Full vulnerability report ==="
        trivy image --format json --output $(Build.ArtifactStagingDirectory)/reports/trivy-report.json $(imageName):$(imageTag)
        trivy image --format table $(imageName):$(imageTag)
        echo ""
        echo "=== Evaluating gate: CRITICAL and HIGH findings ==="
        trivy image --severity CRITICAL,HIGH --exit-code 1 $(imageName):$(imageTag)
      displayName: 'Scan image with Trivy'

    - task: PublishBuildArtifacts@1
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)/reports'
        artifactName: 'container-security-reports'
      displayName: 'Publish Trivy report'
      condition: always()
This pipeline runs two jobs in parallel: one for dependency/code/secret scanning, and one for container image scanning. Both publish their results as build artifacts, and both fail the build on critical findings.
Common Issues & Troubleshooting
1. npm audit fails with ERESOLVE errors
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR! While resolving: eslint-plugin-security@3.0.0
npm ERR! Found: eslint@8.57.0
npm ERR! node_modules/eslint
Cause: Version conflicts between your project dependencies and the security tooling. ESLint 9 made breaking changes to plugin APIs.
Fix: Pin the eslint-plugin-security version that is compatible with your ESLint major version: the 3.x line targets ESLint 9, while the 1.x line works with ESLint 8.
npm install --save-dev eslint-plugin-security@^3.0.0
2. Trivy scan times out downloading vulnerability database
2024-08-15T14:32:01.123Z FATAL failed to download vulnerability DB
error: db download timeout: context deadline exceeded
Cause: The Microsoft-hosted agent has limited bandwidth, and Trivy's vulnerability database is ~40MB. First-time downloads on a fresh agent can timeout.
Fix: Add --db-repository to point to a mirrored database, or cache the database across builds. Note that the Cache@2 task must come before the scan step so the cached database is restored before Trivy runs:
- task: Cache@2
  inputs:
    key: 'trivy-db | "$(Agent.OS)"'
    path: '$(Pipeline.Workspace)/.trivy-cache'
  displayName: 'Cache Trivy database'
- script: |
    export TRIVY_CACHE_DIR=$(Pipeline.Workspace)/.trivy-cache
    mkdir -p $TRIVY_CACHE_DIR
    trivy image --cache-dir $TRIVY_CACHE_DIR --timeout 10m $(imageName):$(imageTag)
  displayName: 'Scan with extended timeout'
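As a further mitigation, Trivy can fetch the database in its own step so a slow download never eats into the scan step's time budget. A sketch using Trivy's --download-db-only flag:

```yaml
- script: |
    export TRIVY_CACHE_DIR=$(Pipeline.Workspace)/.trivy-cache
    mkdir -p $TRIVY_CACHE_DIR
    # Fetch only the vulnerability DB into the cache; nothing is scanned here
    trivy image --download-db-only --cache-dir $TRIVY_CACHE_DIR --timeout 10m
  displayName: 'Pre-download Trivy vulnerability DB'
```

Paired with the Cache@2 task, this step is a near no-op on warm cache hits and isolates download failures from scan failures.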
3. gitleaks detects false positives in test fixtures
Finding: password123
Secret: password123
RuleID: generic-credential
File: test/fixtures/mock-users.json
Line: 8
Cause: Test fixtures often contain fake credentials that match secret detection patterns.
Fix: Create a .gitleaksignore file or use path exclusions in gitleaks configuration:
# .gitleaks.toml
[extend]
useDefault = true
[allowlist]
paths = [
'''test/fixtures/.*''',
'''\.test\.js$''',
'''__mocks__/.*'''
]
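Alternatively, a .gitleaksignore file suppresses individual findings by fingerprint rather than excluding whole paths, which is safer when a fixture directory might one day contain a real secret. The format is the fingerprint gitleaks prints with each finding; the commit hash below is a hypothetical placeholder:

```
# .gitleaksignore -- one finding fingerprint per line
# format: <commit-sha>:<file>:<rule-id>:<line>
0a1b2c3d4e5f0a1b2c3d4e5f0a1b2c3d4e5f0a1b:test/fixtures/mock-users.json:generic-credential:8
```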
4. OWASP Dependency-Check fails with NVD API rate limiting
[ERROR] Failed to update the NVD data
org.owasp.dependencycheck.utils.DownloadFailedException:
HTTP 403 Forbidden - NVD API rate limit exceeded
Cause: Starting in late 2023, the NVD requires an API key for reliable access. Without one, you are subject to aggressive rate limiting.
Fix: Register for a free NVD API key at https://nvd.nist.gov/developers/request-an-api-key and pass it to the task:
- task: dependency-check-build-task@6
inputs:
projectName: '$(Build.Repository.Name)'
scanPath: '$(Build.SourcesDirectory)'
format: 'HTML,JSON'
failOnCVSS: '7'
additionalArguments: >-
--nvdApiKey $(NVD_API_KEY)
--nvdApiDelay 6000
displayName: 'OWASP Dependency-Check'
Store NVD_API_KEY as a secret pipeline variable or in Azure Key Vault.
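One way to wire the key in, assuming a variable group named security-scan-secrets that holds NVD_API_KEY (entered as a secret variable or linked to Key Vault):

```yaml
variables:
  - group: security-scan-secrets   # assumed group containing NVD_API_KEY
```

Secret variables resolve in task inputs via the $(NVD_API_KEY) macro as shown above, but they are not exposed to scripts as environment variables unless you map them explicitly with env:.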
5. Pipeline passes but security scan actually had errors
##[warning]Exit code 1 received from tool '/usr/bin/bash'
##[warning]continueOnError is set, continuing despite previous errors
Cause: Using continueOnError: true on scan steps means the pipeline continues even when scans fail — including when they fail due to configuration errors, not actual findings.
Fix: Separate the scan execution from the gate evaluation. Run scans with continueOnError: true to collect results, then have a separate step that evaluates the results and fails definitively:
# This step collects results (may fail, that is ok)
- script: npm audit --json > audit.json
continueOnError: true
# This step evaluates and is the actual gate (should NOT have continueOnError)
- script: |
node scripts/evaluate-audit.js audit.json
displayName: 'Security gate evaluation'
# No continueOnError here - this MUST stop the build
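For illustration, here is a minimal shell equivalent of the hypothetical scripts/evaluate-audit.js gate. It assumes the npm 7+ audit JSON shape (severity counts under metadata.vulnerabilities) and jq on the agent; the heredoc fixture stands in for real npm audit output:

```shell
# Demo fixture standing in for real `npm audit --json` output (assumed npm 7+ shape)
cat > audit.json <<'EOF'
{"metadata":{"vulnerabilities":{"info":0,"low":2,"moderate":1,"high":0,"critical":0}}}
EOF

# Gate: fail this step (and therefore the build) on any critical or high finding
crit=$(jq '.metadata.vulnerabilities.critical // 0' audit.json)
high=$(jq '.metadata.vulnerabilities.high // 0' audit.json)
echo "critical=$crit high=$high"
if [ "$crit" -gt 0 ] || [ "$high" -gt 0 ]; then
  echo "Security gate FAILED"
  exit 1
fi
echo "Security gate passed"
```

Because the gate only reads the report file, a crashed scan produces a missing or malformed audit.json and the gate fails loudly instead of passing silently.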
Best Practices
- Start with one tool and expand. Do not try to add five scanning tools in one sprint. Start with npm audit, get the team used to fixing findings, then add ESLint security rules, then secret detection, then container scanning. Each addition takes a sprint to stabilize.
- Never ignore critical findings. The moment you start allowing critical vulnerabilities "just this once," you have established a precedent that undermines the entire program. If a critical finding is genuinely a false positive, add it to the suppression file with a documented justification and an expiration date.
- Keep scan times under 5 minutes on PR builds. Developers will disable or ignore slow scans. Use incremental scanning on PRs (scan only changed files or new commits) and reserve full scans for scheduled nightly builds.
- Version-pin your scanning tools. If you install trivy@latest or gitleaks@latest in your pipeline, a new release can break your build with zero code changes on your part. Pin versions and update them deliberately.
- Review suppression files in code review. Treat .trivyignore, .gitleaksignore, and owasp-suppressions.xml changes with the same rigor as production code changes. Every suppression should have a justification comment and ideally an expiration date.
- Publish scan results as build artifacts, always. Even when scans pass, publish the reports. You will need them for audit trails, compliance evidence, and debugging when a finding suddenly appears that was not there yesterday.
- Separate scan execution from gate enforcement. Run the scan tool in one step (with continueOnError: true so you always get a report), then evaluate the results in a separate step that fails the build. This prevents tool crashes from silently passing the gate.
- Use secret detection as a pre-commit hook too. Pipeline-level secret detection catches leaks before they reach production, but the secret is already in your git history. Run gitleaks as a local pre-commit hook so the secret never enters the repository in the first place.
- Document your vulnerability threshold policy. Write down what severity levels block builds, what triggers warnings, and what gets tracked in dashboards. Make this a team agreement, not a pipeline configuration that nobody understands.
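For the pre-commit hook mentioned above, one option (assuming you use the pre-commit framework) is the hook definition shipped in the gitleaks repository. Pin rev to a release you have validated, per the version-pinning practice:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # pin to a validated release; update deliberately
    hooks:
      - id: gitleaks
```

After `pre-commit install`, gitleaks runs against staged changes on every local commit, so a leaked secret is caught before it ever enters git history.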
References
- Azure Pipelines YAML Schema Reference - Official pipeline syntax documentation
- npm audit documentation - Built-in Node.js dependency auditing
- eslint-plugin-security - ESLint rules for Node.js security
- Trivy documentation - Container and filesystem vulnerability scanner
- gitleaks - Secret detection for git repositories
- OWASP Dependency-Check - Software composition analysis tool
- Snyk CLI documentation - Developer-first security scanning
- SonarQube Azure DevOps Integration - SAST integration guide
- NVD API - National Vulnerability Database API for dependency checking
- OWASP CI/CD Security Risks - Top 10 CI/CD security risks
