
Conditional Execution Strategies in Azure Pipelines

A practical guide to conditional execution in Azure DevOps YAML pipelines, covering expressions, runtime parameters, path-based triggers, and real-world patterns for skipping stages and controlling pipeline flow.


Overview

Azure Pipelines executes every stage, job, and step by default, which wastes compute time and produces noise when half the pipeline is irrelevant to the change being made. Conditional execution lets you control exactly what runs and when, based on branch names, file paths, variable values, prior job results, and manual parameters. This article covers every conditional mechanism available in YAML pipelines, from basic condition properties to template expressions, with real-world patterns you can drop into your pipelines today.

Prerequisites

  • An Azure DevOps organization with at least one project and a YAML pipeline
  • Basic understanding of YAML pipeline syntax (triggers, stages, jobs, steps)
  • Familiarity with Azure DevOps predefined variables (Build.SourceBranch, Build.Reason, etc.)
  • A Git repository connected to Azure Pipelines
  • Node.js v18+ installed if running the Node.js examples locally

Condition Syntax Basics

Every stage, job, and step in a YAML pipeline supports a condition property. When the condition evaluates to false, that element is skipped. When omitted, the default condition is succeeded(), which means "run only if all previous dependencies succeeded."

The Four Built-In Status Functions

steps:
  # Runs only if all previous steps succeeded (this is the default)
  - script: echo "Everything passed"
    condition: succeeded()

  # Runs only if a previous step failed
  - script: echo "Something broke"
    condition: failed()

  # Runs regardless of the outcome — success, failure, or cancellation
  - script: echo "This always runs"
    condition: always()

  # Runs only if the pipeline was canceled
  - script: echo "Pipeline was canceled"
    condition: canceled()

The always() condition is essential for cleanup tasks. If you need to tear down a test database or release a cloud resource, give that step condition: always(). Without it, a failure in an earlier step means your cleanup never runs and you leak resources.

The failed() condition is what you use for notifications. Send a Slack message, open a PagerDuty incident, or write to a Teams channel — but only when something actually broke.
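
Put together, a test job that needs both behaviors might look like this (a minimal sketch; the docker compose commands and the SLACK_WEBHOOK secret are placeholders for your own infrastructure):

steps:
  - script: docker compose up -d test-db
    displayName: 'Start Test Database'

  - script: npm test
    displayName: 'Run Tests'

  # Notify only when something broke
  - script: |
      curl -X POST "$(SLACK_WEBHOOK)" \
        -H "Content-Type: application/json" \
        -d '{"text":"Tests failed on $(Build.SourceBranchName)"}'
    displayName: 'Notify on Failure'
    condition: failed()

  # Teardown runs on success, failure, and cancellation alike
  - script: docker compose down -v
    displayName: 'Tear Down Test Database'
    condition: always()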

Applying Conditions at Different Levels

Conditions work at the stage, job, and step level, and they behave slightly differently at each:

stages:
  - stage: Deploy
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployProd
        condition: succeeded()
        steps:
          - script: echo "Deploying"
            condition: succeeded()

Important: A stage-level condition does not automatically propagate to its jobs. If the stage condition is succeeded() and the stage runs, each job within that stage still evaluates its own condition independently. This means you can have a stage that runs but individual jobs within it that skip based on their own conditions.
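
For example, a stage can be gated on the branch while one of its jobs adds its own check (a sketch; runExtendedTests is an assumed pipeline variable):

stages:
  - stage: Test
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      # Runs whenever the stage runs
      - job: UnitTests
        steps:
          - script: npm test

      # Evaluated independently; may be skipped even though the stage ran
      - job: ExtendedTests
        condition: eq(variables['runExtendedTests'], 'true')
        steps:
          - script: npm run test:extended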


Custom Conditions with Expressions

The real power of conditions comes from combining status functions with comparison operators and variable references.

Comparison Operators

Azure Pipelines supports eq, ne, gt, lt, ge, le, startsWith, endsWith, contains, in, notIn, and containsValue.

steps:
  # Run only on the main branch
  - script: npm run deploy
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

  # Run on any release branch
  - script: npm run deploy:staging
    condition: startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')

  # Run only for pull request builds
  - script: npm run lint:strict
    condition: eq(variables['Build.Reason'], 'PullRequest')

  # Skip for a specific repository
  - script: npm run integration-tests
    condition: ne(variables['Build.Repository.Name'], 'legacy-monolith')

Logical Operators: and, or, not

You can combine conditions using and(), or(), and not():

steps:
  # Run on main branch AND only if previous steps succeeded
  - script: npm run deploy:production
    displayName: 'Deploy to Production'
    condition: |
      and(
        succeeded(),
        eq(variables['Build.SourceBranch'], 'refs/heads/main'),
        ne(variables['Build.Reason'], 'Schedule')
      )

  # Run on main OR any release branch
  - script: npm run build:optimized
    displayName: 'Optimized Build'
    condition: |
      or(
        eq(variables['Build.SourceBranch'], 'refs/heads/main'),
        startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')
      )

  # Run on anything except the docs branch
  - script: npm test
    displayName: 'Run Tests'
    condition: |
      not(
        startsWith(variables['Build.SourceBranch'], 'refs/heads/docs/')
      )

A common mistake: Forgetting that condition completely replaces the default succeeded(). If you write condition: eq(variables['Build.SourceBranch'], 'refs/heads/main') without wrapping it in and(succeeded(), ...), the step will run on the main branch even if a previous step failed. Always include succeeded() in your condition unless you explicitly want to run after failures.


Variable-Based Conditions

Variables are the most flexible way to control pipeline flow. You can set them statically, compute them in scripts, pass them from job to job, or inject them at queue time.

Using Pipeline Variables

variables:
  runIntegrationTests: 'true'
  targetEnvironment: 'staging'

steps:
  - script: npm run test:integration
    displayName: 'Integration Tests'
    condition: eq(variables['runIntegrationTests'], 'true')

  - script: npm run deploy
    displayName: 'Deploy to Staging'
    condition: eq(variables['targetEnvironment'], 'staging')

Setting Variables Dynamically with Script Output

This is one of the most powerful patterns. A script can inspect the commit, check file changes, query an API, and then set a variable that controls downstream execution.

steps:
  - script: |
      CHANGED_FILES=$(git diff --name-only HEAD~1 HEAD)
      echo "Changed files: $CHANGED_FILES"

      if echo "$CHANGED_FILES" | grep -qE '\.(js|ts|json)$'; then
        echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
      else
        echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]false"
      fi
    name: detectChanges
    displayName: 'Detect Code Changes'

  - script: npm test
    displayName: 'Run Tests'
    condition: eq(variables['detectChanges.hasCodeChanges'], 'true')

The ##vso[task.setvariable] logging command is how you set variables from within a script. The isOutput=true flag makes the variable available to other jobs and stages, not just later steps in the same job.
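
One subtlety: once a variable is set with isOutput=true, later steps in the same job must reference it through the step's name; the bare variable name no longer resolves.

steps:
  - script: echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
    name: detectChanges

  # Correct: prefix with the step name
  - script: echo "Changes detected: $(detectChanges.hasCodeChanges)"

  # Wrong: resolves to an empty string because the variable was set with isOutput=true
  - script: echo "Changes detected: $(hasCodeChanges)"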

Passing Variables Between Jobs

When you need a variable from one job in another, you must use output variables with the dependencies syntax:

jobs:
  - job: Analyze
    steps:
      - script: |
          echo "##vso[task.setvariable variable=shouldDeploy;isOutput=true]true"
        name: analysis

  - job: Deploy
    dependsOn: Analyze
    condition: eq(dependencies.Analyze.outputs['analysis.shouldDeploy'], 'true')
    steps:
      - script: echo "Deploying because analysis said so"

Passing Variables Between Stages

Cross-stage variable passing adds another level of nesting:

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: |
              echo "##vso[task.setvariable variable=imageTag;isOutput=true]$(Build.BuildId)"
            name: setVars

  - stage: Deploy
    dependsOn: Build
    variables:
      imageTag: $[ stageDependencies.Build.BuildJob.outputs['setVars.imageTag'] ]
    jobs:
      - job: DeployJob
        steps:
          - script: echo "Deploying image tag $(imageTag)"

Notice the syntax difference: in a stage's variables block, cross-stage references use stageDependencies instead of dependencies, and they use the $[ ] runtime expression syntax. Stage-level conditions use yet another format: dependencies.StageName.outputs['JobName.StepName.VariableName'].


Runtime Parameters for Manual Control

Parameters let you define inputs that can be set when manually queuing a pipeline. They provide type safety, validation, and default values — things that plain variables cannot do.

parameters:
  - name: environment
    displayName: 'Target Environment'
    type: string
    default: 'staging'
    values:
      - staging
      - production

  - name: runSecurityScan
    displayName: 'Run Security Scan'
    type: boolean
    default: false

  - name: deployRegions
    displayName: 'Deploy Regions'
    type: object
    default:
      - eastus
      - westus2

stages:
  - stage: SecurityScan
    condition: eq('${{ parameters.runSecurityScan }}', 'true')
    jobs:
      - job: Scan
        steps:
          - script: npm audit --production
            displayName: 'npm audit'

  - stage: Deploy
    jobs:
      - ${{ each region in parameters.deployRegions }}:
        - job: Deploy_${{ region }}
          displayName: 'Deploy to ${{ region }}'
          steps:
            - script: echo "Deploying to ${{ region }}"

Parameters are evaluated at compile time (before the pipeline runs), which means they cannot reference runtime variables. This is the key distinction between parameters and variables, and it is the source of most confusion.


Template Expressions vs Runtime Expressions

Azure Pipelines has two expression syntaxes that look similar but behave very differently. Mixing them up is the number one source of "why isn't my condition working" questions.

Template Expressions: ${{ }}

Evaluated at compile time, before the pipeline starts. The YAML is processed, the expressions are resolved, and the resulting YAML is what actually runs. Template expressions can add or remove entire jobs, steps, and stages.

parameters:
  - name: includeE2E
    type: boolean
    default: false

steps:
  - script: npm run unit-tests
    displayName: 'Unit Tests'

  # This step is physically removed from the YAML if includeE2E is false
  ${{ if eq(parameters.includeE2E, true) }}:
    - script: npm run e2e
      displayName: 'E2E Tests'

Runtime Expressions: $[ ]

Evaluated at runtime, after the pipeline has started. They can reference variables that are set during the pipeline run, including output variables from previous steps.

variables:
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]

steps:
  - script: echo "isMain = $(isMain)"

Condition Expressions (Inline)

The condition property uses its own expression syntax — no delimiters, just the expression directly:

steps:
  - script: echo "deploying"
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

When to Use Which

Syntax        Evaluated      Can Add/Remove YAML Elements     Can Reference Runtime Variables
${{ }}        Compile time   Yes                              No
$[ ]          Runtime        No                               Yes
condition:    Runtime        No (skips, does not remove)      Yes

The practical difference: ${{ if }} physically removes the YAML element so it never appears in the pipeline run log. A condition that evaluates to false shows the step as "skipped" in the log. Use ${{ if }} for parameters and templates; use condition for variables and dynamic behavior.


Conditional Variable Assignment

You can conditionally set variables using both template and runtime expressions:

variables:
  # Template expression — resolved before the pipeline runs
  ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    deployTarget: 'production'
    nodeEnv: 'production'
  ${{ else }}:
    deployTarget: 'staging'
    nodeEnv: 'development'

  # Runtime expression — resolved during the pipeline run
  isRelease: $[startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')]

Gotcha: Template expressions for branch-based variable assignment do not work well with PR builds. During a PR build, Build.SourceBranch is refs/pull/123/merge, not the target branch. If you need to conditionally set variables based on the target branch of a PR, use System.PullRequest.TargetBranch instead, and use a runtime expression or a condition since the PR variables may not be available at compile time for all trigger types.
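
For example, to run a check only on pull requests that target main, gate on Build.Reason and the PR target branch (a sketch; check:breaking-changes is a hypothetical npm script, and note that the target branch is reported as refs/heads/main for Azure Repos but as main for GitHub):

steps:
  - script: npm run check:breaking-changes
    displayName: 'Breaking Change Check'
    condition: |
      and(
        succeeded(),
        eq(variables['Build.Reason'], 'PullRequest'),
        eq(variables['System.PullRequest.TargetBranch'], 'refs/heads/main')
      )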


Path-Based Triggers to Skip Irrelevant Stages

Path filters on triggers are the first line of defense. If only documentation files changed, why run the build at all?

trigger:
  branches:
    include:
      - main
      - release/*
  paths:
    exclude:
      - docs/**
      - '*.md'
      - .github/**

pr:
  branches:
    include:
      - main
  paths:
    exclude:
      - docs/**
      - '*.md'

This prevents the pipeline from triggering at all when only excluded paths are modified. But there is a limitation: path filters are all-or-nothing at the pipeline level. You cannot use path filters to skip specific stages while running others.

For finer-grained control, combine path filters with dynamic variables:

stages:
  - stage: Detect
    jobs:
      - job: DetectChanges
        steps:
          - checkout: self
            fetchDepth: 2
          - script: |
              CODE_CHANGES=$(git diff --name-only HEAD~1 HEAD | grep -cE '\.(js|ts|json|pug|css)$' || true)
              DOC_CHANGES=$(git diff --name-only HEAD~1 HEAD | grep -cE '\.(md|txt|rst)$' || true)

              if [ "$CODE_CHANGES" -gt 0 ]; then
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
              else
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]false"
              fi

              if [ "$DOC_CHANGES" -gt 0 ] && [ "$CODE_CHANGES" -eq 0 ]; then
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]true"
              else
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]false"
              fi
            name: changes
            displayName: 'Classify Changes'

  - stage: Build
    dependsOn: Detect
    condition: |
      and(
        succeeded(),
        eq(dependencies.Detect.outputs['DetectChanges.changes.hasCodeChanges'], 'true')
      )
    jobs:
      - job: BuildApp
        steps:
          - script: npm ci && npm run build

Using dependencies and Job/Stage Results

The dependencies object gives you access to the result of any upstream job or stage. This is how you build conditional fan-out patterns.

stages:
  - stage: Build
    jobs:
      - job: BuildAPI
        steps:
          - script: echo "Building API"
      - job: BuildUI
        steps:
          - script: echo "Building UI"

  - stage: DeployAPI
    dependsOn: Build
    condition: |
      in(dependencies.Build.result, 'Succeeded', 'SucceededWithIssues')
    jobs:
      - job: Deploy
        steps:
          - script: echo "Deploying API"

  - stage: Notify
    dependsOn:
      - DeployAPI
    condition: |
      or(
        failed(),
        eq(dependencies.DeployAPI.result, 'Failed')
      )
    jobs:
      - job: SendAlert
        steps:
          - script: |
              curl -X POST "$(SLACK_WEBHOOK)" \
                -H "Content-Type: application/json" \
                -d '{"text":"Pipeline failed for $(Build.SourceBranch)"}'

Within a stage, you can also check individual job results:

jobs:
  - job: UnitTests
    steps:
      - script: npm test

  - job: IntegrationTests
    steps:
      - script: npm run test:integration

  - job: Report
    dependsOn:
      - UnitTests
      - IntegrationTests
    condition: always()
    variables:
      # Map job results into variables; $(dependencies...) does not resolve as a macro inside scripts
      unitResult: $[ dependencies.UnitTests.result ]
      integrationResult: $[ dependencies.IntegrationTests.result ]
    steps:
      - script: |
          echo "Unit tests: $(unitResult)"
          echo "Integration tests: $(integrationResult)"

The valid result values are Succeeded, SucceededWithIssues, Failed, Canceled, and Skipped.


Conditional Task Insertion with Template Expressions

Template expressions with ${{ if }} let you conditionally insert or remove entire steps, jobs, or stages from the compiled YAML. This is different from condition — the element does not exist at all in the pipeline run.

parameters:
  - name: runCodeCoverage
    type: boolean
    default: true

steps:
  - script: npm ci
    displayName: 'Install Dependencies'

  - script: npm test
    displayName: 'Run Tests'

  ${{ if eq(parameters.runCodeCoverage, true) }}:
    - script: npm run coverage
      displayName: 'Generate Coverage Report'
    - task: PublishCodeCoverageResults@2
      inputs:
        summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage/cobertura-coverage.xml'
      displayName: 'Publish Coverage'

Conditional Insertion in Templates

This pattern shines when combined with reusable templates:

# templates/deploy-steps.yml
parameters:
  - name: environment
    type: string
  - name: runSmoke
    type: boolean
    default: true

steps:
  - script: npm run deploy -- --env ${{ parameters.environment }}
    displayName: 'Deploy to ${{ parameters.environment }}'

  ${{ if eq(parameters.runSmoke, true) }}:
    - script: npm run smoke-test -- --target ${{ parameters.environment }}
      displayName: 'Smoke Test ${{ parameters.environment }}'

# azure-pipelines.yml
stages:
  - stage: DeployStaging
    jobs:
      - job: Deploy
        steps:
          - template: templates/deploy-steps.yml
            parameters:
              environment: staging
              runSmoke: true

  - stage: DeployProd
    jobs:
      - job: Deploy
        steps:
          - template: templates/deploy-steps.yml
            parameters:
              environment: production
              runSmoke: false  # Skip smoke tests in prod — use canary instead

Real-World Patterns

Pattern 1: Skip Deployment on Documentation-Only Changes

stages:
  - stage: Analyze
    jobs:
      - job: CheckFiles
        steps:
          - checkout: self
            fetchDepth: 2
          - script: |
              FILES=$(git diff --name-only HEAD~1 HEAD)
              echo "Changed files:"
              echo "$FILES"

              NON_DOC_FILES=$(echo "$FILES" | grep -cvE '\.(md|txt|rst|adoc)$' || true)

              if [ "$NON_DOC_FILES" -eq 0 ]; then
                echo "##vso[task.setvariable variable=skipDeploy;isOutput=true]true"
                echo "Documentation-only change. Skipping deployment."
              else
                echo "##vso[task.setvariable variable=skipDeploy;isOutput=true]false"
                echo "Code changes detected. Deployment will proceed."
              fi
            name: fileCheck

  - stage: Build
    dependsOn: Analyze
    condition: |
      and(
        succeeded(),
        ne(dependencies.Analyze.outputs['CheckFiles.fileCheck.skipDeploy'], 'true')
      )
    jobs:
      - job: Build
        steps:
          - script: npm ci && npm run build

  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - job: DeployApp
        steps:
          - script: npm run deploy

Pattern 2: Security Scan Only on Release Branches

stages:
  - stage: SecurityScan
    condition: |
      or(
        eq(variables['Build.SourceBranch'], 'refs/heads/main'),
        startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')
      )
    jobs:
      - job: DependencyAudit
        steps:
          - script: |
              npm audit --production --audit-level=high
              if [ $? -ne 0 ]; then
                echo "##vso[task.logissue type=error]High or critical vulnerabilities found"
                exit 1
              fi
            displayName: 'npm Audit'

      - job: LicenseCheck
        steps:
          - script: |
              npx license-checker --production --failOn "GPL-3.0;AGPL-3.0"
            displayName: 'License Compliance Check'

Pattern 3: Conditional Notifications

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: npm ci && npm test && npm run build

  - stage: Deploy
    dependsOn: Build
    condition: |
      and(
        succeeded(),
        eq(variables['Build.SourceBranch'], 'refs/heads/main')
      )
    jobs:
      - job: DeployProd
        steps:
          - script: npm run deploy

  - stage: Notify
    dependsOn:
      - Build
      - Deploy
    condition: always()
    jobs:
      - job: SlackNotification
        variables:
          buildResult: $[stageDependencies.Build.BuildAndTest.result]
          deployResult: $[coalesce(stageDependencies.Deploy.DeployProd.result, 'Skipped')]
        steps:
          - script: |
              if [ "$(buildResult)" == "Failed" ] || [ "$(deployResult)" == "Failed" ]; then
                STATUS="FAILED"
                COLOR="#FF0000"
              elif [ "$(deployResult)" == "Skipped" ]; then
                STATUS="BUILD ONLY"
                COLOR="#FFAA00"
              else
                STATUS="SUCCESS"
                COLOR="#00FF00"
              fi

              curl -X POST "$(SLACK_WEBHOOK_URL)" \
                -H "Content-Type: application/json" \
                -d "{
                  \"attachments\": [{
                    \"color\": \"$COLOR\",
                    \"title\": \"Pipeline $STATUS: $(Build.DefinitionName)\",
                    \"text\": \"Branch: $(Build.SourceBranchName)\nBuild: $(buildResult)\nDeploy: $(deployResult)\",
                    \"footer\": \"$(System.TeamFoundationCollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)\"
                  }]
                }"
            displayName: 'Send Slack Notification'

Pattern 4: Manual Override via Parameters

parameters:
  - name: forceDeployment
    displayName: 'Force deployment (bypass checks)'
    type: boolean
    default: false

  - name: skipTests
    displayName: 'Skip test suite'
    type: boolean
    default: false

stages:
  - stage: Test
    condition: ne('${{ parameters.skipTests }}', 'true')
    jobs:
      - job: RunTests
        steps:
          - script: npm test

  - stage: Deploy
    dependsOn:
      - Test
    condition: |
      or(
        eq('${{ parameters.forceDeployment }}', 'true'),
        and(
          succeeded(),
          eq(variables['Build.SourceBranch'], 'refs/heads/main')
        )
      )
    jobs:
      - job: DeployApp
        steps:
          - script: npm run deploy

Complete Working Example

Here is a full pipeline that ties together every pattern discussed above. It builds a Node.js application, detects whether only documentation changed, runs security scans on release branches, allows manual override, and sends notifications on failure.

# azure-pipelines.yml

trigger:
  branches:
    include:
      - main
      - release/*
      - feature/*
  paths:
    exclude:
      - .github/**

pr:
  branches:
    include:
      - main

parameters:
  - name: forceFullPipeline
    displayName: 'Force full pipeline (ignore path detection)'
    type: boolean
    default: false

  - name: deployTarget
    displayName: 'Deployment target'
    type: string
    default: 'auto'
    values:
      - auto
      - staging
      - production

variables:
  nodeVersion: '20.x'
  npmCacheFolder: $(Pipeline.Workspace)/.npm
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]
  isRelease: $[startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')]

stages:
  # ============================================================
  # Stage 1: Analyze what changed and set control variables
  # ============================================================
  - stage: Analyze
    displayName: 'Analyze Changes'
    jobs:
      - job: ClassifyChanges
        displayName: 'Classify Changed Files'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
            fetchDepth: 2

          - script: |
              echo "Build reason: $(Build.Reason)"
              echo "Source branch: $(Build.SourceBranch)"

              if [ "$(Build.Reason)" == "Manual" ] && [ "${{ parameters.forceFullPipeline }}" == "True" ]; then
                echo "Manual run with force flag. Running full pipeline."
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]false"
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
                exit 0
              fi

              FILES=$(git diff --name-only HEAD~1 HEAD 2>/dev/null || echo "")

              if [ -z "$FILES" ]; then
                echo "Could not determine changed files. Running full pipeline."
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]false"
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
                exit 0
              fi

              echo "Changed files:"
              echo "$FILES"
              echo ""

              CODE_COUNT=$(echo "$FILES" | grep -cE '\.(js|ts|jsx|tsx|json|pug|css|scss|yml|yaml|sh)$' || true)
              DOC_COUNT=$(echo "$FILES" | grep -cE '\.(md|txt|rst|adoc)$' || true)
              CONFIG_COUNT=$(echo "$FILES" | grep -cE '(Dockerfile|\.dockerignore|\.do/|\.azure/)' || true)

              echo "Code files changed: $CODE_COUNT"
              echo "Doc files changed: $DOC_COUNT"
              echo "Config files changed: $CONFIG_COUNT"

              RELEVANT_CHANGES=$((CODE_COUNT + CONFIG_COUNT))

              if [ "$RELEVANT_CHANGES" -eq 0 ] && [ "$DOC_COUNT" -gt 0 ]; then
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]true"
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]false"
                echo "Documentation-only change detected."
              else
                echo "##vso[task.setvariable variable=docsOnly;isOutput=true]false"
                echo "##vso[task.setvariable variable=hasCodeChanges;isOutput=true]true"
                echo "Code or config changes detected."
              fi
            name: detect
            displayName: 'Detect Change Types'

  # ============================================================
  # Stage 2: Build and test (skip if docs-only)
  # ============================================================
  - stage: Build
    displayName: 'Build & Test'
    dependsOn: Analyze
    condition: |
      and(
        succeeded(),
        ne(dependencies.Analyze.outputs['ClassifyChanges.detect.docsOnly'], 'true')
      )
    jobs:
      - job: BuildAndTest
        displayName: 'Build, Lint, and Test'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '$(nodeVersion)'
            displayName: 'Install Node.js'

          - task: Cache@2
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              restoreKeys: |
                npm | "$(Agent.OS)"
              path: $(npmCacheFolder)
            displayName: 'Cache npm packages'

          - script: npm ci --cache $(npmCacheFolder)
            displayName: 'Install Dependencies'

          - script: npm run lint --if-present
            displayName: 'Lint'

          - script: npm test
            displayName: 'Run Tests'

          - script: npm run build --if-present
            displayName: 'Build Application'

          - task: ArchiveFiles@2
            inputs:
              rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
              includeRootFolder: false
              archiveType: 'zip'
              archiveFile: '$(Build.ArtifactStagingDirectory)/app-$(Build.BuildId).zip'
            displayName: 'Archive Build Output'

          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'drop'
            displayName: 'Publish Artifact'

  # ============================================================
  # Stage 3: Security scan (main and release branches only)
  # ============================================================
  - stage: SecurityScan
    displayName: 'Security Scan'
    dependsOn: Build
    condition: |
      and(
        succeeded(),
        or(
          eq(variables['isMain'], 'true'),
          eq(variables['isRelease'], 'true')
        )
      )
    jobs:
      - job: AuditDependencies
        displayName: 'Audit Dependencies'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '$(nodeVersion)'

          - script: npm ci
            displayName: 'Install Dependencies'

          - script: |
              echo "Running npm audit..."
              npm audit --production --audit-level=high 2>&1 | tee audit-results.txt
              AUDIT_EXIT=$?

              if [ $AUDIT_EXIT -ne 0 ]; then
                echo "##vso[task.logissue type=warning]Security vulnerabilities found. Review audit-results.txt."
              fi

              echo "##vso[task.setvariable variable=auditPassed;isOutput=true]$([ $AUDIT_EXIT -eq 0 ] && echo true || echo false)"
            name: audit
            displayName: 'npm Audit'

          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: 'audit-results.txt'
              artifact: 'security-audit'
            condition: always()
            displayName: 'Publish Audit Results'

  # ============================================================
  # Stage 4: Deploy (conditional on branch and parameter)
  # ============================================================
  - stage: Deploy
    displayName: 'Deploy Application'
    dependsOn:
      - Build
      - SecurityScan
    condition: |
      and(
        in(dependencies.Build.result, 'Succeeded', 'SucceededWithIssues'),
        or(
          eq(variables['isMain'], 'true'),
          eq(variables['isRelease'], 'true')
        ),
        in(dependencies.SecurityScan.result, 'Succeeded', 'SucceededWithIssues', 'Skipped')
      )
    variables:
      # Determine deployment target
      ${{ if eq(parameters.deployTarget, 'production') }}:
        resolvedTarget: 'production'
      ${{ elseif eq(parameters.deployTarget, 'staging') }}:
        resolvedTarget: 'staging'
      ${{ else }}:
        resolvedTarget: 'staging'
    jobs:
      - deployment: DeployApp
        displayName: 'Deploy to $(resolvedTarget)'
        pool:
          vmImage: 'ubuntu-latest'
        environment: '$(resolvedTarget)'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: DownloadPipelineArtifact@2
                  inputs:
                    buildType: 'current'
                    artifactName: 'drop'
                    targetPath: '$(Pipeline.Workspace)/drop'

                - script: |
                    echo "Deploying to $(resolvedTarget)"
                    echo "Build ID: $(Build.BuildId)"
                    echo "Branch: $(Build.SourceBranchName)"
                    # Replace with your actual deployment commands
                    # az webapp deploy --name myapp-$(resolvedTarget) \
                    #   --resource-group rg-myapp \
                    #   --src-path $(Pipeline.Workspace)/drop/app-$(Build.BuildId).zip
                  displayName: 'Deploy Application'

                - script: |
                    echo "Running health check..."
                    # Replace with your actual health check
                    # HEALTH=$(curl -s -o /dev/null -w "%{http_code}" https://myapp-$(resolvedTarget).azurewebsites.net/health)
                    # if [ "$HEALTH" != "200" ]; then
                    #   echo "##vso[task.logissue type=error]Health check failed with status $HEALTH"
                    #   exit 1
                    # fi
                    echo "Health check passed"
                  displayName: 'Post-Deploy Health Check'

  # ============================================================
  # Stage 5: Notification (always runs, reports outcome)
  # ============================================================
  - stage: Notify
    displayName: 'Send Notifications'
    dependsOn:
      - Analyze
      - Build
      - SecurityScan
      - Deploy
    condition: |
      and(
        always(),
        or(
          failed(),
          eq(variables['isMain'], 'true')
        )
      )
    jobs:
      - job: NotifyTeam
        displayName: 'Notify Team'
        pool:
          vmImage: 'ubuntu-latest'
        variables:
          analyzeResult: $[coalesce(stageDependencies.Analyze.ClassifyChanges.result, 'Unknown')]
          buildResult: $[coalesce(stageDependencies.Build.BuildAndTest.result, 'Skipped')]
          deployResult: $[coalesce(stageDependencies.Deploy.DeployApp.result, 'Skipped')]
        steps:
          - script: |
              echo "======================================"
              echo "Pipeline Summary"
              echo "======================================"
              echo "Branch:    $(Build.SourceBranchName)"
              echo "Commit:    $(Build.SourceVersion)"
              echo "Reason:    $(Build.Reason)"
              echo "Analyze:   $(analyzeResult)"
              echo "Build:     $(buildResult)"
              echo "Deploy:    $(deployResult)"
              echo "======================================"

              # Determine overall status
              if [ "$(buildResult)" == "Failed" ] || [ "$(deployResult)" == "Failed" ]; then
                echo "##vso[task.logissue type=error]Pipeline completed with failures"
                # Send failure notification
                # curl -X POST "$(SLACK_WEBHOOK)" ...
              elif [ "$(buildResult)" == "Skipped" ]; then
                echo "Pipeline skipped (docs-only change)"
              else
                echo "Pipeline completed successfully"
              fi
            displayName: 'Pipeline Summary & Notification'

What This Pipeline Does

  1. Analyze runs first, checking git diff to determine whether only documentation files changed. If the run was triggered manually with the forceFullPipeline parameter, the classification logic is bypassed and the control variables are set so the full pipeline runs.

  2. Build runs only if there are code changes. Documentation-only PRs skip the entire build, saving compute minutes and reducing CI queue pressure.

  3. SecurityScan runs only on main and release/* branches. Feature branches do not need dependency auditing on every push — it just slows development down without adding value.

  4. Deploy runs only on main and release/* branches, and only if the build succeeded. The security scan stage can be skipped (for non-release branches) without blocking deployment. The deployment target can be overridden with a parameter when running manually.

  5. Notify always runs. It collects results from every stage using stageDependencies and reports the outcome. On failure or on main, it sends a notification so the team knows immediately.


Common Issues and Troubleshooting

Issue 1: Condition References an Output Variable That Does Not Exist

Error message:

An expression is not allowed here. Condition contains unknown variable 'stageDependencies.Build.BuildJob.outputs[setTag.imageTag]'.

Cause: You forgot the quotes inside the brackets. The 'stepName.variableName' key in the outputs reference must be a single-quoted string.

Fix:

# Wrong
condition: eq(stageDependencies.Build.BuildJob.outputs[setTag.imageTag], 'latest')

# Correct
condition: eq(stageDependencies.Build.BuildJob.outputs['setTag.imageTag'], 'latest')

Issue 2: Condition Runs Even When a Previous Step Failed

Error message: No error — the step just runs when you expected it to be skipped.

Cause: You wrote a condition without including succeeded(). Remember that a custom condition completely replaces the default succeeded().

Fix:

# Wrong — runs on main even if previous steps failed
condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

# Correct — runs on main ONLY if previous steps succeeded
condition: |
  and(
    succeeded(),
    eq(variables['Build.SourceBranch'], 'refs/heads/main')
  )

Issue 3: Template Expression Cannot Access Runtime Variables

Error message:

Unexpected value ''. Expected format is a string of the form 'refs/heads/branch-name'.

Or there is no error and the variable simply resolves to an empty string at compile time.

Cause: You used ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }} expecting it to evaluate at runtime. Template expressions are evaluated at compile time, and Build.SourceBranch may not be resolved yet depending on the trigger type.

Fix: Use condition for runtime variable checks, or use ${{ if }} only with parameters:

# Wrong — template expression with runtime variable
${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
  - script: echo "This may not work as expected"

# Correct — use condition for runtime variables
- script: echo "This works correctly"
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

# Correct — use template expression with parameters
${{ if eq(parameters.environment, 'production') }}:
  - script: echo "This works because parameters are compile-time"

Issue 4: Cross-Stage Variable Reference Returns Empty String

Error message: No error, but the variable value is empty.

Cause: You used dependencies instead of stageDependencies when referencing a variable from a different stage, or you forgot to set isOutput=true on the variable.

Fix:

# Wrong — 'dependencies' is for cross-job within the same stage
variables:
  myVar: $[dependencies.Build.BuildJob.outputs['step.myVar']]

# Correct — 'stageDependencies' is for cross-stage
variables:
  myVar: $[stageDependencies.Build.BuildJob.outputs['step.myVar']]

And make sure the variable was set with isOutput=true:

# Wrong
echo "##vso[task.setvariable variable=myVar]someValue"

# Correct
echo "##vso[task.setvariable variable=myVar;isOutput=true]someValue"

Issue 5: Stage Condition Using in() Fails to Parse

Error message:

/azure-pipelines.yml: Unexpected symbol: 'in'. Located at position 1 within expression: in(dependencies.Build.result, 'Succeeded', 'SucceededWithIssues')

Cause: The in() function requires the value being tested as the first argument, followed by the candidate values. (String comparison in pipeline expressions is ordinal and case-insensitive, so casing is not the problem; the missing first argument is.)

Fix:

# Wrong — missing the value argument or incorrect casing
condition: in('Succeeded', 'SucceededWithIssues')

# Correct
condition: in(dependencies.Build.result, 'Succeeded', 'SucceededWithIssues')

Issue 6: coalesce() Returns Empty Instead of Default Value

Error message: No error, variable is just empty.

Cause: coalesce() returns the first non-null, non-empty-string value. If the stage was skipped entirely, the result may be an empty string rather than null, and coalesce() treats these differently depending on context.

Fix: Use explicit fallback logic:

variables:
  deployResult: $[coalesce(stageDependencies.Deploy.DeployJob.result, 'Skipped')]

# If that still doesn't work, handle it in the script
steps:
  - script: |
      RESULT="$(deployResult)"
      if [ -z "$RESULT" ]; then
        RESULT="Skipped"
      fi
      echo "Deploy result: $RESULT"

Best Practices

  • Always include succeeded() in custom conditions. The most common bug is writing condition: eq(variables['Build.SourceBranch'], 'refs/heads/main') and wondering why the step runs after a failure. Wrap branch or variable checks in and(succeeded(), ...) unless you explicitly want failure-tolerant behavior.

  • Use parameters for values known at queue time, variables for runtime values. Parameters give you dropdowns, type validation, and compile-time checking. Variables are for dynamic values computed during execution. Do not use variables where parameters would work — you lose validation and the pipeline UI is less helpful.

  • Prefer ${{ if }} over condition when working with templates and parameters. Template expressions remove elements entirely from the compiled YAML, which means no wasted agent allocation or confusing "skipped" entries in the log. Use condition only when you need runtime evaluation.

  • Set fetchDepth: 2 when using git diff for change detection. Azure Pipelines defaults to a shallow clone with fetchDepth: 1, which means git diff HEAD~1 HEAD will fail because there is no parent commit. Setting fetchDepth: 2 gives you exactly the commits you need without downloading the entire history.

  • Use coalesce() when referencing results from stages that might be skipped. If stage B depends on stage A and stage A is skipped, stageDependencies.A.JobName.result may be null or empty. Wrap it in coalesce(stageDependencies.A.JobName.result, 'Skipped') to avoid silent failures in your notification logic.

  • Document your conditions with displayName. A condition like and(succeeded(), ne(stageDependencies.Analyze.CheckFiles.outputs['detect.docsOnly'], 'true'), or(eq(variables['isMain'], 'true'), eq(variables['isRelease'], 'true'))) is unreadable. Give the stage a clear displayName like "Deploy (main and release only, skip docs)" so engineers can understand the pipeline without parsing expressions.

  • Test conditions with manual runs and different parameter combinations. Use the "Run pipeline" button in Azure DevOps to queue manual runs with different parameter values. This is the fastest way to verify that your conditions work correctly across all branches and scenarios.

  • Keep path-based conditions in the trigger section, not in stage conditions. If a change to docs/** should never trigger the pipeline at all, use paths.exclude in the trigger. Do not rely on a detection stage to skip everything downstream — that still consumes an agent and adds latency. Use stage-level conditions for finer-grained control within a triggered pipeline.

  • Avoid deeply nested conditions. If your condition has more than three levels of and/or/not nesting, break it into a script that sets a single boolean output variable, as in the sketch below. A 20-line bash script is easier to debug than a 20-line YAML expression.
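
For example, a gate step can collapse branch and trigger logic into one output variable that downstream conditions test directly (a minimal sketch; the branch rules are illustrative):

steps:
  - script: |
      SHOULD_DEPLOY=false
      # Deploy from main or release branches, but never for scheduled runs
      if [[ "$(Build.SourceBranch)" == "refs/heads/main" || "$(Build.SourceBranch)" == refs/heads/release/* ]]; then
        if [ "$(Build.Reason)" != "Schedule" ]; then
          SHOULD_DEPLOY=true
        fi
      fi
      echo "shouldDeploy = $SHOULD_DEPLOY"
      echo "##vso[task.setvariable variable=shouldDeploy;isOutput=true]$SHOULD_DEPLOY"
    name: gate
    displayName: 'Evaluate Deployment Gate'

  - script: npm run deploy
    displayName: 'Deploy'
    condition: eq(variables['gate.shouldDeploy'], 'true')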

