Migrating Classic Pipelines to YAML: A Step-by-Step Guide
A complete guide to migrating Azure DevOps classic build and release pipelines to YAML, covering task mapping, variable groups, template conversion, and parallel operation during transition.
Overview
Classic pipelines in Azure DevOps were the original way to define builds and releases through a GUI editor, but they carry serious limitations: no version control for pipeline definitions, no pull request reviews for CI/CD changes, and no portability between projects or organizations. YAML pipelines solve all of these problems by treating your pipeline configuration as code that lives alongside your application source. This guide walks through the complete process of migrating classic build and release pipelines to YAML, including task mapping, variable group connections, template conversion, and strategies for running both pipeline types in parallel during the transition.
Prerequisites
- An Azure DevOps organization and project with existing classic build and/or release pipelines
- Permissions to create YAML pipelines and edit repository files
- A Node.js application with package.json and a working test suite
- Basic familiarity with YAML syntax and Azure DevOps pipeline concepts
- Access to variable groups and service connections referenced by your classic pipelines
- Git command-line tools installed locally
Why Migrate from Classic to YAML
I have managed teams running dozens of classic pipelines, and the operational pain compounds fast. Here is why the migration is worth the effort.
Version Control and Auditability
Classic pipeline definitions live inside Azure DevOps as opaque JSON blobs. When someone changes a build step at 2 AM and production breaks the next morning, you have no blame history, no diff, and no way to roll back the pipeline definition itself. YAML pipelines live in your repository. Every change goes through a commit, every commit has an author, and you can git log your way back to any previous version.
# See who changed the pipeline and when
git log --oneline --follow azure-pipelines.yml
# Output:
# a3f8c21 Add staging smoke tests to deploy stage
# 7b2e4f1 Fix npm cache key to include package-lock hash
# e91c3d0 Add Node 20 to build matrix
# 1f4a8b2 Initial YAML pipeline migration from classic
Code Review for Infrastructure Changes
With YAML pipelines, a change to your CI/CD process goes through the same pull request workflow as application code. A teammate can review a pipeline change, catch a missing environment variable, or question why you removed a test step before it ever merges. Classic pipelines have no review mechanism at all.
Portability and Reuse
YAML templates can be shared across repositories and even across organizations using template references. Classic task groups are scoped to a single project and cannot be exported cleanly. If you are building a platform team that standardizes CI/CD patterns, YAML templates are the only viable path.
Microsoft's Direction
Microsoft has been investing exclusively in YAML pipelines for years. Classic release pipelines have received no new features since 2020. New capabilities like environment checks, deployment strategies, and pipeline caching are YAML-only. The writing is on the wall.
Auditing Your Existing Classic Pipeline
Before writing a single line of YAML, you need a complete inventory of what your classic pipeline actually does. Open each classic pipeline and document the following.
Build Pipeline Audit Checklist
- Triggers: Which branches trigger builds? Are there path filters? Scheduled triggers?
- Agent pool: Microsoft-hosted or self-hosted? Which VM image?
- Variables: Pipeline variables, variable groups, and any variables marked as secrets
- Tasks: Every task in order, including version numbers (e.g., NodeTool@0, Npm@1)
- Task inputs: The specific configuration for each task (arguments, working directories, conditions)
- Artifacts: What gets published and under what name?
- Demands: Any agent demands or capabilities required?
Release Pipeline Audit Checklist
- Artifact sources: Which build pipeline artifacts are consumed?
- Stages/environments: How many stages? What order?
- Pre-deployment approvals: Who approves? Timeout settings?
- Pre-deployment gates: Azure Monitor queries, REST API checks?
- Deployment tasks: What happens in each stage?
- Variable groups: Which groups are linked? Stage-scoped variables?
- Deployment groups vs. agent pools: Are you deploying to VMs via deployment groups?
Here is a quick script to export your classic pipeline definition via the Azure DevOps REST API for reference during migration:
# Export classic build pipeline definition
ORGANIZATION="myorg"
PROJECT="myproject"
PIPELINE_ID="42"
curl -s -u ":${AZURE_DEVOPS_PAT}" \
"https://dev.azure.com/${ORGANIZATION}/${PROJECT}/_apis/build/definitions/${PIPELINE_ID}?api-version=7.1" \
| python -m json.tool > classic-build-definition.json
echo "Exported build pipeline ${PIPELINE_ID} to classic-build-definition.json"
# Export classic release pipeline definition
RELEASE_ID="8"
curl -s -u ":${AZURE_DEVOPS_PAT}" \
"https://vsrm.dev.azure.com/${ORGANIZATION}/${PROJECT}/_apis/release/definitions/${RELEASE_ID}?api-version=7.1" \
| python -m json.tool > classic-release-definition.json
echo "Exported release pipeline ${RELEASE_ID} to classic-release-definition.json"
These JSON files give you the exact task versions, inputs, and variable references you need for the conversion.
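Beyond tasks, the export also captures the variables and variable groups you will need to recreate. A short script can summarize them. This is a sketch: the variables/variableGroups shape follows the exported definition format, and the inline sample stands in for your real classic-build-definition.json.

```javascript
// Sample standing in for a real exported classic definition
var sampleExport = {
  variables: {
    buildConfiguration: { value: 'Release' },
    apiKey: { value: null, isSecret: true }
  },
  variableGroups: [{ name: 'app-config' }]
};

// Summarize pipeline variables and linked variable groups
function summarizeVariables(definition) {
  var summary = [];
  Object.keys(definition.variables || {}).forEach(function(name) {
    var v = definition.variables[name];
    summary.push(name + (v.isSecret ? ' (secret)' : ''));
  });
  (definition.variableGroups || []).forEach(function(group) {
    summary.push('group: ' + group.name);
  });
  return summary;
}

console.log(summarizeVariables(sampleExport).join('\n'));
// buildConfiguration
// apiKey (secret)
// group: app-config
```

Secret variables export with a null value, so expect to re-enter secret values by hand in the Library after migration.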
Mapping Classic Concepts to YAML Equivalents
This is the mental model that makes the migration click. Every concept in classic pipelines has a direct YAML counterpart.
| Classic Concept | YAML Equivalent | Notes |
|---|---|---|
| Build pipeline | trigger + stages (build/test) | Merged into single file |
| Release pipeline | stages (deploy stages) | Same file or extends template |
| Phase / Agent job | jobs within a stage | Named jobs with pool specs |
| Task | steps (task or script) | Same task references, e.g. task: Npm@1 |
| Task group | Template file (template:) | Better - supports parameters |
| Pipeline variables | variables: section | Inline or from groups |
| Variable group | variables: - group: MyGroup | Same groups, just referenced differently |
| Artifact | PublishPipelineArtifact / DownloadPipelineArtifact | Replaces PublishBuildArtifacts |
| Pre-deployment approval | Environment checks | Configured on the environment resource |
| Pre-deployment gate | Environment checks (invoke REST) | Or custom Azure Function checks |
| Deployment group | Environment with VM resource | Requires agent registration |
| Branch filter (trigger) | trigger.branches.include/exclude | Plus path filters |
| Scheduled trigger | schedules: | Cron syntax |
| Demands | pool.demands | Same concept |
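Read together, the table collapses into a single-file skeleton. The sketch below is for orientation only, not a complete pipeline; the names are placeholders:

```yaml
trigger:                      # was: classic branch filters
  branches:
    include:
      - main
variables:
  - group: MyGroup            # was: linked variable group
stages:
  - stage: Build              # was: the classic build pipeline
    jobs:
      - job: BuildJob         # was: phase / agent job
        pool:
          vmImage: 'ubuntu-latest'
        steps:                # was: task tiles on the canvas
          - script: npm ci
  - stage: Deploy             # was: a release stage
    dependsOn: Build
    jobs:
      - deployment: DeployJob
        environment: 'staging'   # approvals and gates now live on the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy"
```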
Converting Build Pipelines Step by Step
Let me walk through converting a typical Node.js classic build pipeline. Suppose your classic build pipeline has these phases:
- Use Node.js 20.x
- Run npm ci
- Run npm test
- Run npm run build
- Copy files to staging directory
- Publish build artifact
The Classic Definition (Conceptual)
In the classic editor, each of those is a task tile you drag onto the canvas. The pipeline is triggered on main, uses ubuntu-latest, and has a variable group called app-config linked.
The YAML Equivalent
trigger:
branches:
include:
- main
paths:
exclude:
- '*.md'
- docs/
pool:
vmImage: 'ubuntu-latest'
variables:
- group: app-config
- name: nodeVersion
value: '20.x'
steps:
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
displayName: 'Install Node.js $(nodeVersion)'
- script: npm ci
displayName: 'Install dependencies'
- script: npm test
displayName: 'Run tests'
- script: npm run build --if-present
displayName: 'Build application'
- task: CopyFiles@2
inputs:
sourceFolder: '$(System.DefaultWorkingDirectory)'
contents: |
**/*
!node_modules/**
!.git/**
!test/**
targetFolder: '$(Build.ArtifactStagingDirectory)'
displayName: 'Copy files to staging'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)'
artifact: 'drop'
displayName: 'Publish artifact'
Notice a few things. The PublishPipelineArtifact@1 task replaces the older PublishBuildArtifacts@1. The script step is shorthand for a CmdLine@2 task. You can use script for simple commands and task references for everything that needs structured inputs.
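To make the shorthand concrete, the two forms below are roughly equivalent; the expanded version is approximately what the agent runs for a script step:

```yaml
steps:
  - script: npm ci
    displayName: 'Install dependencies'

  # ...is shorthand for approximately:
  - task: CmdLine@2
    inputs:
      script: npm ci
    displayName: 'Install dependencies'
```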
Key Conversion Rules
Task versions matter. If your classic pipeline uses Npm@1, use Npm@1 in YAML. Do not blindly upgrade to a newer version during migration. Get the pipeline working identically first, then upgrade task versions as a separate change.
Conditions translate directly. If a classic task has a "Run this task" condition like "Even if a previous task has failed", the YAML equivalent is:
- script: echo "Cleanup step"
condition: always()
displayName: 'Always run cleanup'
Other common conditions:
# Only run on main branch
condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
# Only run if previous steps succeeded (default behavior)
condition: succeeded()
# Run even if cancelled
condition: not(canceled())
Converting Release Pipelines to Multi-Stage YAML
This is where the real complexity lives. Classic release pipelines have a visual stage-by-stage flow with artifact consumption, approvals between stages, and per-stage variable scoping. In YAML, all of this collapses into a single multi-stage pipeline definition.
Stage Dependencies
Classic release stages execute in a visual left-to-right flow. In YAML, you express this with dependsOn:
stages:
- stage: Build
jobs:
- job: BuildJob
steps:
- script: echo "Building"
- stage: DeployStaging
dependsOn: Build
jobs:
- deployment: DeployToStaging
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to staging"
- stage: DeployProduction
dependsOn: DeployStaging
jobs:
- deployment: DeployToProduction
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to production"
Artifact Consumption
In classic release pipelines, artifacts are consumed automatically from the linked build. In multi-stage YAML, artifacts published by an earlier stage download automatically in deployment jobs; in regular jobs, or whenever you need explicit control over the path, use the DownloadPipelineArtifact task:
# Artifacts from the Build stage download automatically in deployment jobs
# They land at $(Pipeline.Workspace)/drop/
# If you need explicit control:
- task: DownloadPipelineArtifact@2
inputs:
artifact: 'drop'
path: '$(Pipeline.Workspace)/drop'
displayName: 'Download build artifact'
Approvals and Checks
Classic pre-deployment approvals are configured per-stage in the release pipeline editor. In YAML, approvals are configured on Environment resources in Azure DevOps.
- Go to Pipelines > Environments
- Create environments: staging, production
- On each environment, click Approvals and checks
- Add approval checks with designated approvers
The pipeline YAML itself does not contain approval logic. It simply targets the environment, and Azure DevOps enforces whatever checks are configured on that environment:
- deployment: DeployToProduction
environment: 'production' # Approvals configured here, not in YAML
strategy:
runOnce:
deploy:
steps:
- script: echo "This only runs after approval"
This is actually better than classic approvals because the approval policy is centralized on the environment, not scattered across individual release pipelines.
Handling Variable Groups and Library Connections
Variable groups migrate cleanly. In classic pipelines, you link variable groups to the pipeline or to specific stages. In YAML, you reference them at the pipeline level or at the stage level.
Pipeline-Level Variable Groups
variables:
- group: shared-config
- group: api-keys
- name: buildConfiguration
value: 'Release'
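Variables from groups resolve with the same $(name) macro syntax as inline variables. A quick sanity-check step makes this visible (SHARED_SETTING is a hypothetical variable assumed to exist in the shared-config group):

```yaml
steps:
  - script: |
      echo "buildConfiguration: $(buildConfiguration)"
      echo "from group:         $(SHARED_SETTING)"   # hypothetical variable from shared-config
    displayName: 'Show resolved variables'
```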
Stage-Scoped Variable Groups
If your classic release pipeline links different variable groups to different stages, scope them in YAML:
stages:
- stage: DeployStaging
variables:
- group: staging-config
jobs:
- deployment: Deploy
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- script: echo "DB_HOST=$(DB_HOST)"
- stage: DeployProduction
variables:
- group: production-config
jobs:
- deployment: Deploy
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: echo "DB_HOST=$(DB_HOST)"
The variable groups staging-config and production-config likely contain the same variable names with different values. The stage scoping ensures each deployment uses the correct configuration.
Secret Variables
Secret variables from variable groups work the same way in YAML. They are masked in logs, unavailable to builds from forked repositories, and not exposed to scripts as environment variables unless you explicitly map them:
steps:
- script: |
node deploy.js
displayName: 'Run deployment script'
env:
DB_PASSWORD: $(DB_PASSWORD)
API_SECRET: $(API_SECRET)
Migrating Task Groups to Templates
Classic task groups are reusable collections of tasks that you insert into pipelines like a single step. YAML templates are the equivalent, and they are significantly more powerful because they support parameters with types, defaults, and validation.
Classic Task Group Example
Suppose you have a task group called "Node.js Build and Test" that:
- Installs Node.js
- Runs npm ci
- Runs npm test
- Publishes test results
YAML Template Equivalent
Create a template file called templates/nodejs-build-test.yml:
# templates/nodejs-build-test.yml
parameters:
- name: nodeVersion
type: string
default: '20.x'
- name: workingDirectory
type: string
default: '$(System.DefaultWorkingDirectory)'
- name: publishTestResults
type: boolean
default: true
steps:
- task: NodeTool@0
inputs:
versionSpec: '${{ parameters.nodeVersion }}'
displayName: 'Install Node.js ${{ parameters.nodeVersion }}'
- script: npm ci
workingDirectory: '${{ parameters.workingDirectory }}'
displayName: 'Install dependencies'
- script: npm test -- --reporter mocha-junit-reporter
workingDirectory: '${{ parameters.workingDirectory }}'
displayName: 'Run tests'
env:
MOCHA_FILE: '$(Common.TestResultsDirectory)/test-results.xml'
- ${{ if eq(parameters.publishTestResults, true) }}:
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '$(Common.TestResultsDirectory)/test-results.xml'
displayName: 'Publish test results'
condition: succeededOrFailed()
Using the Template
# azure-pipelines.yml
trigger:
- main
pool:
vmImage: 'ubuntu-latest'
steps:
- template: templates/nodejs-build-test.yml
parameters:
nodeVersion: '20.x'
publishTestResults: true
- script: npm run build
displayName: 'Build application'
Cross-Repository Templates
If your classic task groups are shared across projects, use template references from another repository:
resources:
repositories:
- repository: templates
type: git
name: PlatformTeam/pipeline-templates
ref: refs/heads/main
steps:
- template: nodejs/build-test.yml@templates
parameters:
nodeVersion: '20.x'
This is a massive improvement over classic task groups, which are locked to a single Azure DevOps project.
Dealing with Classic-Only Features
Some classic pipeline features do not have direct YAML equivalents. Here is how to handle each one.
Deployment Groups
Classic deployment groups let you register VMs and deploy to them as a group. In YAML, the equivalent is an Environment with VM resources.
- Create an environment in Azure DevOps
- Add VM resources to the environment (each VM runs the Azure Pipelines agent)
- Use a deployment job targeting that environment:
- stage: DeployToVMs
jobs:
- deployment: DeployApp
environment:
name: 'production'
resourceType: VirtualMachine
tags: 'web-server'
strategy:
rolling:
maxParallel: 2
deploy:
steps:
- script: |
cd /var/www/myapp
tar -xzf $(Pipeline.Workspace)/drop/app.tar.gz
npm ci --production
pm2 restart myapp
displayName: 'Deploy and restart application'
The rolling strategy with maxParallel: 2 deploys to two VMs at a time, which mirrors the classic deployment group rolling deployment behavior.
Pre-Deployment Gates (Automated)
Classic release pipelines support automated gates that poll external services. In YAML environments, you achieve this with Invoke REST API or Invoke Azure Function checks.
Go to Environments > [your environment] > Approvals and checks and add:
- Invoke REST API: Call a health endpoint and check the response
- Invoke Azure Function: Run custom gate logic in a serverless function
- Query Azure Monitor alerts: Check for active alerts before deploying
For a common pattern like checking that your staging environment is healthy before deploying to production, configure an "Invoke REST API" check on the production environment:
URL: https://staging.myapp.com/health
Method: GET
Success criteria: eq(root['status'], 'healthy')
Evaluation interval: 5 minutes
Timeout: 30 minutes
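For intuition, that success criteria behaves like the sketch below: the check fetches the URL, parses the JSON body, and tests a field on the root object. This is a simplified re-implementation for illustration, not the actual gate evaluator:

```javascript
// Simplified illustration of what eq(root['status'], 'healthy') tests.
// Not the real gate engine -- just the shape of the evaluation.
function evaluateGate(responseBody) {
  var root = JSON.parse(responseBody);
  return root['status'] === 'healthy';
}

console.log(evaluateGate('{"status": "healthy"}'));  // true
console.log(evaluateGate('{"status": "degraded"}')); // false
```

The check repeats this evaluation every interval until it passes or the timeout expires, which is why a flapping health endpoint can hold a deployment for the full 30 minutes.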
Gate Workaround with Pipeline Logic
If you need gate logic inside the pipeline itself (rather than on the environment), you can use a separate job that polls an endpoint:
- stage: DeployProduction
dependsOn: DeployStaging
jobs:
- job: HealthGate
displayName: 'Verify staging health'
pool:
vmImage: 'ubuntu-latest'
steps:
- script: |
echo "Waiting for staging to stabilize..."
sleep 60
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://staging.myapp.com/health)
if [ "$STATUS" != "200" ]; then
echo "##vso[task.logissue type=error]Staging health check failed with status $STATUS"
exit 1
fi
echo "Staging is healthy, proceeding to production"
displayName: 'Health check gate'
- deployment: DeployToProd
dependsOn: HealthGate
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to production"
Testing the Migrated Pipeline
Never switch off your classic pipeline before your YAML pipeline is proven. Follow this testing sequence.
Step 1: Validate YAML Syntax Locally
Azure DevOps has no offline validator, but the pipeline preview REST API validates your YAML, including template expansion, without queuing a run:
# Validate pipeline YAML without queuing a run (catches syntax and template errors)
# PIPELINE_ID here is the YAML pipeline's definition ID, not the classic one
curl -s -u ":${AZURE_DEVOPS_PAT}" \
-H "Content-Type: application/json" \
-d '{"previewRun": true}' \
"https://dev.azure.com/${ORGANIZATION}/${PROJECT}/_apis/pipelines/${PIPELINE_ID}/preview?api-version=7.1-preview.1"
# Or just check the file in the web editor:
# Pipelines > New pipeline > Existing Azure Repos Git > select your file > Review
Step 2: Run on a Feature Branch
Create your YAML pipeline but trigger it from a feature branch first:
trigger:
branches:
include:
- feature/yaml-migration
Run it multiple times. Compare outputs with your classic pipeline:
# Compare build artifacts between classic and YAML pipelines
diff <(unzip -l classic-artifact.zip | sort) <(unzip -l yaml-artifact.zip | sort)
Step 3: Verify Task Equivalence
Write a quick Node.js script to compare the tasks between your classic export and your YAML file:
var fs = require('fs');
var yaml = require('js-yaml');
var classicDef = JSON.parse(fs.readFileSync('classic-build-definition.json', 'utf8'));
var yamlDef = yaml.load(fs.readFileSync('azure-pipelines.yml', 'utf8'));
// Extract classic task list. In the export, step.task.id is a GUID,
// so print the step display name plus the version spec for readability.
var classicTasks = classicDef.process.phases[0].steps.map(function(step) {
return step.displayName + ' @' + step.task.versionSpec;
});
console.log('Classic pipeline tasks:');
classicTasks.forEach(function(task, index) {
console.log(' ' + (index + 1) + '. ' + task);
});
// Extract YAML steps
var yamlSteps = yamlDef.steps || [];
if (yamlDef.stages) {
yamlDef.stages.forEach(function(stage) {
stage.jobs.forEach(function(job) {
var steps = job.steps || (job.strategy && job.strategy.runOnce && job.strategy.runOnce.deploy && job.strategy.runOnce.deploy.steps) || [];
yamlSteps = yamlSteps.concat(steps);
});
});
}
console.log('\nYAML pipeline steps:');
yamlSteps.forEach(function(step, index) {
var name = step.task || step.script || step.bash || 'unknown';
console.log(' ' + (index + 1) + '. ' + name);
});
Classic pipeline tasks:
1. Use Node.js 20.x @0.*
2. npm ci @1.*
3. npm test @1.*
4. npm run build @2.*
5. Copy files to staging @2.*
6. Publish artifact @1.*
YAML pipeline steps:
1. NodeTool@0
2. npm ci
3. npm test
4. npm run build --if-present
5. CopyFiles@2
6. PublishPipelineArtifact@1
Step 4: Compare Build Outputs
Run both pipelines on the same commit and compare:
- Build duration (YAML may be slightly slower initially due to checkout step overhead)
- Artifact size and contents
- Test results and code coverage numbers
- Log output for warnings or deprecation notices
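To put a number on the duration comparison, a small script can diff the timestamps from two run records. The startTime/finishTime fields mirror the Builds REST API shape; the records below are illustrative stand-ins for real responses:

```javascript
// Compare wall-clock durations of a classic run and a YAML run.
function durationSeconds(run) {
  return (Date.parse(run.finishTime) - Date.parse(run.startTime)) / 1000;
}

// Illustrative stand-ins for real REST API run records
var classicRun = { startTime: '2026-02-08T14:00:00Z', finishTime: '2026-02-08T14:06:30Z' };
var yamlRun = { startTime: '2026-02-08T14:00:00Z', finishTime: '2026-02-08T14:07:10Z' };

var delta = durationSeconds(yamlRun) - durationSeconds(classicRun);
console.log('YAML run vs classic: ' + (delta >= 0 ? '+' : '') + delta + 's');
// prints "YAML run vs classic: +40s"
```

Run both pipelines against the same commit several times before drawing conclusions; a single run tells you little because agent startup time varies.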
Running Classic and YAML in Parallel During Transition
This is the part most migration guides skip, and it matters a lot. You should not flip from classic to YAML overnight, especially on production-facing pipelines.
Parallel Operation Strategy
- Week 1-2: Create the YAML pipeline and run it on feature branches. Fix issues.
- Week 3-4: Run both pipelines on main. The YAML pipeline runs but does not deploy to production. The classic pipeline remains the production deployment mechanism.
- Week 5-6: Switch deployment to the YAML pipeline. Keep the classic pipeline running but disable its deployment stages.
- Week 7+: Disable the classic pipeline entirely. Delete it after 30 days of successful YAML operation.
Preventing Duplicate Deployments
When running both pipelines, make sure only one actually deploys. Use a variable or condition to control this:
# In your YAML pipeline during parallel operation
variables:
- name: isProductionDeployEnabled
value: false # Set to true when ready to cut over
stages:
- stage: Build
jobs:
- job: BuildJob
steps:
- script: npm ci && npm test && npm run build
displayName: 'Build and test'
- stage: DeployProduction
dependsOn: Build
condition: and(succeeded(), eq(variables.isProductionDeployEnabled, 'true'))
jobs:
- deployment: Deploy
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying"
When you are ready to cut over, flip isProductionDeployEnabled to true in the YAML and disable the classic release pipeline's production stage.
Naming Convention During Transition
Rename your pipelines to make the situation clear to everyone on the team:
- Classic build: MyApp-Build (CLASSIC - DEPRECATED)
- Classic release: MyApp-Release (CLASSIC - DEPRECATED)
- YAML pipeline: MyApp-CI-CD
Complete Working Example
Here is a real-world multi-stage YAML pipeline for a Node.js application. This replaces both a classic build pipeline and a classic release pipeline with staging and production stages.
Project Structure
myapp/
  app.js
  package.json
  package-lock.json
  test/
    app.test.js
  templates/
    nodejs-build.yml
  azure-pipelines.yml
The Template: templates/nodejs-build.yml
# templates/nodejs-build.yml
parameters:
- name: nodeVersion
type: string
default: '20.x'
steps:
- task: NodeTool@0
inputs:
versionSpec: '${{ parameters.nodeVersion }}'
displayName: 'Install Node.js ${{ parameters.nodeVersion }}'
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
restoreKeys: |
npm | "$(Agent.OS)"
path: '$(Pipeline.Workspace)/.npm'
displayName: 'Cache npm packages'
- script: npm ci --cache $(Pipeline.Workspace)/.npm
displayName: 'Install dependencies'
The Pipeline: azure-pipelines.yml
# azure-pipelines.yml
# Replaces: Classic Build "MyApp-Build" (ID: 42)
# Replaces: Classic Release "MyApp-Release" (ID: 8)
trigger:
branches:
include:
- main
- release/*
paths:
exclude:
- '*.md'
- docs/**
schedules:
- cron: '0 6 * * 1-5'
displayName: 'Weekday morning build'
branches:
include:
- main
always: false # Only if there are changes
variables:
- group: myapp-shared-config
- name: nodeVersion
value: '20.x'
- name: npmCacheFolder
value: '$(Pipeline.Workspace)/.npm'
stages:
# ============================================
# STAGE 1: Build and Test
# ============================================
- stage: Build
displayName: 'Build & Test'
jobs:
- job: BuildAndTest
displayName: 'Build, Lint, Test'
pool:
vmImage: 'ubuntu-latest'
steps:
- template: templates/nodejs-build.yml
parameters:
nodeVersion: '$(nodeVersion)'
- script: npm run lint --if-present
displayName: 'Run linter'
- script: npm test -- --reporter mocha-junit-reporter --reporter-options mochaFile=$(Common.TestResultsDirectory)/test-results.xml
displayName: 'Run unit tests'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '$(Common.TestResultsDirectory)/test-results.xml'
testRunTitle: 'Unit Tests - Node $(nodeVersion)'
condition: succeededOrFailed()
displayName: 'Publish test results'
- script: |
npm run build --if-present
echo "Build completed at $(date)"
echo "Artifact contents:"
ls -la
echo "Total size: $(du -sh . --exclude=node_modules --exclude=.git | cut -f1)"
displayName: 'Build application'
- task: CopyFiles@2
inputs:
sourceFolder: '$(System.DefaultWorkingDirectory)'
contents: |
app.js
package.json
package-lock.json
routes/**
views/**
models/**
utils/**
static/**
db/**
targetFolder: '$(Build.ArtifactStagingDirectory)/app'
displayName: 'Copy application files'
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(Build.ArtifactStagingDirectory)/app'
includeRootFolder: false
archiveType: 'zip' # AzureWebApp@1 deploys .zip packages, not .tar.gz
archiveFile: '$(Build.ArtifactStagingDirectory)/myapp-$(Build.BuildId).zip'
replaceExistingArchive: true
displayName: 'Create deployment archive'
- script: |
echo "Archive size: $(du -h $(Build.ArtifactStagingDirectory)/myapp-$(Build.BuildId).zip | cut -f1)"
displayName: 'Report artifact size'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)/myapp-$(Build.BuildId).zip'
artifact: 'drop'
displayName: 'Publish artifact'
# ============================================
# STAGE 2: Deploy to Staging
# ============================================
- stage: DeployStaging
displayName: 'Deploy to Staging'
dependsOn: Build
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
variables:
- group: myapp-staging-config
jobs:
- deployment: DeployToStaging
displayName: 'Deploy to Staging App Service'
pool:
vmImage: 'ubuntu-latest'
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- script: |
echo "Deploying build $(Build.BuildId) to staging"
echo "Source branch: $(Build.SourceBranch)"
echo "Commit: $(Build.SourceVersion)"
ls -la $(Pipeline.Workspace)/drop/
displayName: 'Display deployment info'
- task: AzureWebApp@1
inputs:
azureSubscription: 'MyAzureServiceConnection'
appType: 'webAppLinux'
appName: '$(STAGING_APP_NAME)'
package: '$(Pipeline.Workspace)/drop/*.zip'
runtimeStack: 'NODE|20-lts'
startUpCommand: 'npm start'
displayName: 'Deploy to Azure App Service (Staging)'
postRouteTraffic:
steps:
- script: |
echo "Running smoke tests against staging..."
sleep 30 # Wait for app to start
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" https://$(STAGING_APP_NAME).azurewebsites.net/health)
if [ "$RESPONSE" != "200" ]; then
echo "##vso[task.logissue type=error]Staging smoke test failed. Health endpoint returned $RESPONSE"
exit 1
fi
echo "Staging smoke test passed (HTTP $RESPONSE)"
displayName: 'Smoke test staging deployment'
on:
failure:
steps:
- script: |
echo "##vso[task.logissue type=warning]Staging deployment failed. Check logs above."
echo "Rolling back is handled by Azure App Service deployment slots."
displayName: 'Handle staging failure'
# ============================================
# STAGE 3: Deploy to Production
# ============================================
- stage: DeployProduction
displayName: 'Deploy to Production'
dependsOn: DeployStaging
condition: succeeded()
variables:
- group: myapp-production-config
jobs:
- deployment: DeployToProduction
displayName: 'Deploy to Production App Service'
pool:
vmImage: 'ubuntu-latest'
environment: 'production' # Manual approval configured on this environment
strategy:
runOnce:
deploy:
steps:
- script: |
echo "=== PRODUCTION DEPLOYMENT ==="
echo "Build: $(Build.BuildId)"
echo "Commit: $(Build.SourceVersion)"
echo "Triggered by: $(Build.RequestedFor)"
echo "Timestamp: $(date -u)"
displayName: 'Log production deployment details'
- task: AzureWebApp@1
inputs:
azureSubscription: 'MyAzureServiceConnection'
appType: 'webAppLinux'
appName: '$(PRODUCTION_APP_NAME)'
package: '$(Pipeline.Workspace)/drop/*.zip'
runtimeStack: 'NODE|20-lts'
startUpCommand: 'npm start'
deployToSlotOrASE: true
slotName: 'canary'
displayName: 'Deploy to canary slot'
- script: |
echo "Verifying canary slot health..."
sleep 45
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" https://$(PRODUCTION_APP_NAME)-canary.azurewebsites.net/health)
if [ "$RESPONSE" != "200" ]; then
echo "##vso[task.logissue type=error]Canary health check failed (HTTP $RESPONSE). Aborting swap."
exit 1
fi
echo "Canary is healthy (HTTP $RESPONSE). Proceeding with slot swap."
displayName: 'Verify canary health'
- task: AzureAppServiceManage@0
inputs:
azureSubscription: 'MyAzureServiceConnection'
action: 'Swap Slots'
webAppName: '$(PRODUCTION_APP_NAME)'
sourceSlot: 'canary'
targetSlot: 'production'
preserveVnet: true
displayName: 'Swap canary to production'
- script: |
echo "Production deployment complete."
echo "Verifying production health..."
sleep 15
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" https://$(PRODUCTION_APP_NAME).azurewebsites.net/health)
echo "Production health: HTTP $RESPONSE"
if [ "$RESPONSE" != "200" ]; then
echo "##vso[task.logissue type=warning]Production may be unhealthy after swap. Monitor closely."
fi
displayName: 'Post-deployment health verification'
The Health Check Endpoint
For the smoke tests in the pipeline to work, your Node.js application needs a health endpoint:
// routes/health.js
var express = require('express');
var router = express.Router();
var os = require('os');
router.get('/health', function(req, res) {
var healthcheck = {
status: 'healthy',
uptime: process.uptime(),
timestamp: new Date().toISOString(),
hostname: os.hostname(),
memory: {
used: Math.round(process.memoryUsage().heapUsed / 1024 / 1024) + 'MB',
total: Math.round(process.memoryUsage().heapTotal / 1024 / 1024) + 'MB'
},
version: require('../package.json').version
};
res.status(200).json(healthcheck);
});
module.exports = router;
Register it in your main app file:
// app.js (add this line)
var healthRouter = require('./routes/health');
app.use('/', healthRouter);
Response when curled during deployment verification:
{
"status": "healthy",
"uptime": 42.318,
"timestamp": "2026-02-08T14:32:18.442Z",
"hostname": "myapp-staging-7f8d9c4b6d-xk2mn",
"memory": {
"used": "28MB",
"total": "48MB"
},
"version": "2.4.1"
}
Common Issues and Troubleshooting
Issue 1: Variable Group Not Found
##[error]Variable group 'myapp-staging-config' is not authorized for use in pipeline 'MyApp-CI-CD'.
Cause: YAML pipelines require explicit authorization to use variable groups. Classic pipelines may have had this granted automatically.
Fix: Go to Pipelines > Library > [Variable Group] > Pipeline permissions and authorize your YAML pipeline. Alternatively, on the first run, Azure DevOps will prompt you to authorize resources. Click "View" on the failed run and grant the permissions.
Issue 2: Environment Does Not Exist
##[error]Environment production could not be found. The environment does not exist or has not been authorized for use.
Cause: Environments must be created before the pipeline references them, or the pipeline must have permission to auto-create them.
Fix: Manually create the environment in Pipelines > Environments before running the pipeline. Set up approval checks while you are there. If you want auto-creation, the first run of the pipeline will create the environment, but it will not have any approval checks configured.
Issue 3: Artifact Not Found in Deployment Stage
##[error]No artifacts were found for pipeline resource: current. Verify that the pipeline run has published artifacts.
Cause: The deployment job is looking for artifacts from the current pipeline, but the artifact name does not match or the build stage did not publish them.
Fix: Verify that the artifact name in PublishPipelineArtifact matches what the deployment stage expects. In deployment jobs, artifacts download to $(Pipeline.Workspace)/{artifact-name}/. If your build publishes to artifact name drop, your deployment references $(Pipeline.Workspace)/drop/.
```yaml
# Build stage - publishes artifact named 'drop'
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'   # <-- This name must match

# Deploy stage - references the same name
- script: ls $(Pipeline.Workspace)/drop/   # <-- Same name here
```
Issue 4: Classic Pipeline Trigger Conflict
2 pipelines triggered for push to refs/heads/main
- MyApp-Build (CLASSIC) - Build #487
- MyApp-CI-CD (YAML) - Run #23
Cause: During parallel operation, both pipelines trigger on the same branch push. If both deploy, you get duplicate deployments.
Fix: Disable the trigger on the classic build pipeline by removing its branch triggers (set to "Override YAML trigger" and clear all branches). Or disable the classic pipeline's deployment stages while keeping the build for comparison. You can also add a condition to the YAML pipeline's deploy stages using the parallel operation variable shown earlier.
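One way to gate the YAML deploy stages during parallel operation is a condition on an opt-in variable. A sketch, where `yamlDeployEnabled` is an illustrative variable name you would define yourself, not something Azure DevOps provides:

```yaml
- stage: DeployStaging
  dependsOn: Build
  # Deploy only when the opt-in variable is set, so the classic
  # release pipeline can keep owning deployments during the transition.
  condition: and(succeeded(), eq(variables['yamlDeployEnabled'], 'true'))
```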
Issue 5: Task Group Cannot Be Referenced in YAML
##[error]A template expression is not allowed in this context: 'taskGroups/MyTaskGroup'
Cause: Classic task groups cannot be directly referenced in YAML. They are a classic-only concept.
Fix: Convert the task group to a YAML template file. Export the task group definition from the Azure DevOps UI (open the task group, note all tasks and parameters), and recreate it as a templates/*.yml file with parameters.
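As a sketch of that conversion, assuming the task group wrapped a Node.js install/test sequence (the task list and the `nodeVersion` parameter are assumptions about what your task group contained):

```yaml
# templates/build-steps.yml -- hypothetical replacement for a classic task group
parameters:
- name: nodeVersion
  type: string
  default: '18.x'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: ${{ parameters.nodeVersion }}
- script: npm ci
  displayName: 'Install dependencies'
- script: npm test
  displayName: 'Run tests'
```

The calling pipeline then replaces the task group reference with `- template: templates/build-steps.yml` plus a `parameters:` block, just as it would pass inputs to the old task group.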
Issue 6: Self-Hosted Agent Pool Not Resolving
##[error]No agent found in pool 'MyCustomPool' which satisfies the following demands: npm, node
Cause: The pool name or demands differ between classic and YAML definitions. Classic pipelines may use a display name for the pool while YAML needs the actual pool name.
Fix: Verify the exact pool name in Project Settings > Agent pools. Specify demands explicitly if needed:
```yaml
pool:
  name: 'MyCustomPool'
  demands:
  - npm
  - node
  - Agent.OS -equals Linux
```
Best Practices
Migrate build pipelines first, release pipelines second. Build pipelines are simpler and let your team build confidence with YAML syntax before tackling multi-stage deployments. Get the build green and producing identical artifacts before touching releases.
Use the Azure DevOps "Export to YAML" feature as a starting point. In the classic build editor, click the three dots on any task and select "View YAML". This gives you the exact YAML for that task with all its inputs. It is not perfect, but it saves time. This feature is not available for release pipelines, which is why release migration takes longer.
Pin task versions during migration. If your classic pipeline uses `NodeTool@0`, use `NodeTool@0` in YAML. Do not upgrade to a newer major version during migration. Validate equivalence first, upgrade later as a separate tracked change.
Create environments and configure approval checks before the first YAML deployment run. If the pipeline creates the environment automatically on its first run, it will not have any approval checks, and your code could deploy to production without review. Always pre-create environments with proper gates.
Use template files from day one, even if you only have one pipeline. Every team that starts with inline steps eventually refactors to templates. Doing it upfront is cheaper. Group related steps (build, test, deploy) into template files and call them with parameters.
Keep a migration log. Document what changed, what was removed, and any behavioral differences between the classic and YAML versions. This log is invaluable when something goes wrong two months after the migration and nobody remembers what the classic pipeline did differently.
Run both pipelines in parallel for at least two full release cycles before decommissioning classic. This catches edge cases in triggers, variable resolution, and deployment behavior that you will not find in a single test run.
Test your YAML pipeline on a feature branch with artificial trigger rules first. Do not push an untested `azure-pipelines.yml` to `main` with a `trigger: main` rule. You will get a surprise build and possibly a surprise deployment.
Use `dependsOn` and `condition` together to control stage flow precisely. A stage with `dependsOn: Build` and `condition: succeeded()` will only run if Build succeeds. Add branch conditions to prevent staging/production deploys on feature branches.
Commit your pipeline YAML to the repository root as `azure-pipelines.yml`. This is the default filename Azure DevOps looks for. You can use a different name, but sticking with the convention makes it discoverable. Put templates in a `templates/` subdirectory.
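The branch-scoped trigger and `dependsOn`/`condition` practices above can be sketched together in one file (the branch name `feature/yaml-migration` and the step contents are illustrative):

```yaml
# azure-pipelines.yml -- trigger scoped to a test branch while migrating
trigger:
  branches:
    include:
    - feature/yaml-migration   # hypothetical test branch; widen to main after validation

stages:
- stage: Build
  jobs:
  - job: Build
    steps:
    - script: npm ci && npm test

- stage: DeployStaging
  dependsOn: Build
  # succeeded() guards on Build; the branch check prevents deploys from other branches
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/feature/yaml-migration'))
  jobs:
  - job: Deploy
    steps:
    - script: echo "deploy to staging"   # placeholder deploy step
```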
References
- Azure DevOps YAML Pipeline Schema Reference - Complete schema documentation for all YAML pipeline elements
- Migrate from Classic to YAML Pipelines - Microsoft's official migration guide
- YAML Pipeline Templates - Template syntax, parameters, and cross-repo references
- Environment Approvals and Checks - Configuring approval gates on environments
- Pipeline Artifacts - Publishing and consuming artifacts in multi-stage pipelines
- Classic to YAML Task Mapping - Task reference with YAML syntax for every built-in task
- Deployment Strategies in YAML - Rolling, canary, and runOnce deployment patterns
