Pipeline Resources: Repositories, Containers, and Packages
Master pipeline resources in Azure DevOps including repository checkouts, container jobs, package feeds, and cross-pipeline triggers for modular CI/CD architectures.
Overview
Pipeline resources are the mechanism Azure DevOps provides for declaring external dependencies that your pipeline needs -- other repositories, container images, package feeds, and even other pipelines. They let you pull in shared templates from a central repo, run your build inside a purpose-built Docker container, trigger a deployment when a new package version lands in your feed, or chain pipelines together into orchestrated workflows. If you are writing non-trivial YAML pipelines and you are not using the resources block, you are probably duplicating templates across repos, hardcoding image tags, or building brittle trigger chains with scheduled polls.
Prerequisites
- An Azure DevOps organization and project with Pipelines enabled
- Familiarity with YAML pipeline syntax (trigger, stages, jobs, steps)
- Basic understanding of Docker and container registries
- An Azure Artifacts feed or access to an external package registry (npm, NuGet, or PyPI)
- Git experience with multiple repositories
- Service connections configured for any external container registries you plan to reference
Understanding the Resources Block
The resources block sits at the top level of your YAML pipeline, alongside trigger, pool, and stages. It declares external dependencies that Azure DevOps resolves when the run is created, before any steps execute. The resource types include repositories, containers, packages, pipelines, and webhooks. Each serves a distinct purpose, and they can be combined freely.
Here is the skeleton:
resources:
repositories:
- repository: <alias>
type: git | github | githubenterprise | bitbucket
name: <project>/<repo>
ref: refs/heads/<branch>
containers:
- container: <alias>
image: <registry>/<image>:<tag>
endpoint: <service-connection>
packages:
- package: <alias>
type: npm | NuGet | PyPI
connection: <service-connection>
name: <package-name>
version: <version>
pipelines:
- pipeline: <alias>
source: <pipeline-name>
trigger:
branches:
include:
- main
The key concept is the alias. Every resource gets an alias that you reference throughout the rest of the pipeline. For repositories, you use the alias in checkout steps and template references. For containers, the alias goes in the container property of a job. For pipelines, the alias lets you access the triggering pipeline's artifacts. This indirection is what makes pipelines portable -- you change the resource definition in one place instead of hunting through dozens of steps.
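To make the indirection concrete, here is a minimal sketch (the aliases and repo/pipeline names are hypothetical) showing one alias of each kind declared once and then referenced from a checkout step, a job's container property, and a download step:
resources:
  repositories:
    - repository: tooling            # referenced by checkout and by template paths like file.yml@tooling
      type: git
      name: MyProject/build-tooling
      ref: refs/heads/main
  containers:
    - container: build-image         # referenced by the job's container property
      image: node:20-bookworm
  pipelines:
    - pipeline: upstream             # referenced by download and resources.pipeline.upstream.* variables
      source: 'Upstream-CI'

jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    container: build-image
    steps:
      - checkout: self
      - checkout: tooling
      - download: upstream
        artifact: drop
      - script: ls $(Pipeline.Workspace)/upstream/drop
        displayName: 'Show downloaded artifact'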
Repository Resources
Repository resources are the most commonly used resource type. They let you check out code from multiple repositories in the same pipeline run, pull in shared YAML templates from a central repo, and pin to specific branches, tags, or commits.
Multi-Repo Checkout
By default, a pipeline only checks out the repository where the YAML file lives (called self). To bring in additional repos, declare them as resources and add checkout steps:
resources:
repositories:
- repository: shared-scripts
type: git
name: MyProject/shared-build-scripts
ref: refs/heads/main
- repository: config-repo
type: git
name: MyProject/environment-config
ref: refs/tags/v2.4.0
trigger:
branches:
include:
- main
pool:
vmImage: 'ubuntu-latest'
steps:
- checkout: self
path: s/app
- checkout: shared-scripts
path: s/shared
- checkout: config-repo
path: s/config
- script: |
echo "App source:"
ls $(Build.SourcesDirectory)/app/
echo "Shared scripts:"
ls $(Build.SourcesDirectory)/shared/
echo "Config files:"
ls $(Build.SourcesDirectory)/config/
displayName: 'Verify checkout layout'
When you use multiple checkout steps, the directory layout changes. Instead of dumping everything into $(Build.SourcesDirectory), each repo gets its own subdirectory. The path property controls the exact layout. Without explicit paths, repos land in directories named after their repo name, which can get messy with long names.
One thing that trips people up: when you have multiple checkout steps, the working directory for subsequent script steps is $(Build.SourcesDirectory), not the root of any particular repo. You need to explicitly cd into the directory you want, or use full paths.
Templates from Other Repositories
This is where repository resources really shine. You can maintain a central repository of YAML templates -- step templates, job templates, stage templates -- and reference them from any pipeline in your organization:
resources:
repositories:
- repository: templates
type: git
name: Platform/pipeline-templates
ref: refs/tags/v3.1.0
stages:
- stage: Build
jobs:
- template: jobs/node-build.yml@templates
parameters:
nodeVersion: '20'
workingDirectory: '$(Build.SourcesDirectory)/app'
npmScript: 'build'
- stage: Test
dependsOn: Build
jobs:
- template: jobs/node-test.yml@templates
parameters:
nodeVersion: '20'
workingDirectory: '$(Build.SourcesDirectory)/app'
coverageThreshold: 80
- stage: Deploy
dependsOn: Test
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- template: jobs/deploy-app-service.yml@templates
parameters:
environment: 'production'
azureSubscription: 'Production-Azure'
appName: 'my-node-api'
The @templates suffix after the file path tells Azure DevOps to look in the repository with the alias templates. This is the pattern that lets platform teams standardize CI/CD across hundreds of repositories without copy-pasting YAML into every single one.
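For context, a consumed template is just a parameterized YAML file living in the templates repository. Here is a minimal sketch of what jobs/node-build.yml might look like -- the parameter names mirror the ones used above, but the actual file in your templates repo will differ:
# Platform/pipeline-templates -- jobs/node-build.yml (illustrative)
parameters:
  - name: nodeVersion
    type: string
    default: '20'
  - name: workingDirectory
    type: string
  - name: npmScript
    type: string
    default: 'build'

jobs:
  - job: NodeBuild
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '${{ parameters.nodeVersion }}.x'
      - script: |
          npm ci
          npm run ${{ parameters.npmScript }}
        workingDirectory: ${{ parameters.workingDirectory }}
        displayName: 'npm ci and run ${{ parameters.npmScript }}'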
Version Pinning and Branch References
Pinning your template repository to a tag rather than a branch is strongly recommended. If you point at refs/heads/main, any change to the templates repo immediately affects every pipeline that consumes it. That is a blast radius problem. Pin to a tag, test your pipelines, and then bump the tag when you are ready:
resources:
repositories:
# Pinned to a specific tag -- safe, predictable
- repository: templates
type: git
name: Platform/pipeline-templates
ref: refs/tags/v3.1.0
# You cannot pin to a commit SHA in the ref field; a tag is the
# most precise ref a repository resource accepts. This entry
# follows HEAD of main -- acceptable for scripts your team controls
- repository: infra-scripts
type: git
name: Platform/infrastructure-scripts
ref: refs/heads/main
# This also follows HEAD of main -- dangerous for shared templates
- repository: bleeding-edge
type: git
name: Platform/experimental-tools
ref: refs/heads/main
For GitHub repositories, the syntax changes slightly:
resources:
repositories:
- repository: oss-templates
type: github
name: my-org/pipeline-templates
endpoint: GitHub-ServiceConnection
ref: refs/tags/v2.0.0
The endpoint field is required for GitHub repos -- it references a GitHub service connection configured in your project settings.
Fetching Depth and Submodules
Checkout behavior is controlled on the checkout step itself: fetchDepth, lfs, and submodules apply to repository resources just as they do to self:
steps:
- checkout: shared-scripts
fetchDepth: 1
lfs: false
submodules: false
path: s/shared
For large repositories, fetchDepth: 1 is a significant performance optimization. A full clone of a repo with 50,000 commits can take 2-3 minutes; a shallow clone finishes in seconds.
Container Resources
Container resources let you run your pipeline jobs inside Docker containers. This gives you complete control over the build environment -- exact language versions, pre-installed tools, specific OS configurations -- without relying on what Microsoft has installed on their hosted agents.
Container Jobs
The simplest use case is running an entire job inside a container:
resources:
containers:
- container: node20
image: node:20.11-bookworm
- container: node18
image: node:18.19-bookworm
jobs:
- job: BuildNode20
container: node20
steps:
- script: |
node --version
npm --version
npm ci
npm run build
npm test
displayName: 'Build and test on Node 20'
- job: BuildNode18
container: node18
steps:
- script: |
node --version
npm ci
npm run build
npm test
displayName: 'Build and test on Node 18'
When a job specifies a container, the agent pulls that image, starts a container, and runs every step inside it. The source code is mounted into the container automatically. The agent itself still runs on the host -- only your steps run inside the container.
Private Registry Authentication
For images hosted in a private registry (Azure Container Registry, AWS ECR, Docker Hub private repos), you need a service connection:
resources:
containers:
- container: build-env
image: myregistry.azurecr.io/build-tools/node-builder:20.11
endpoint: ACR-ServiceConnection
env:
NPM_TOKEN: $(npm-auth-token)
- container: scan-env
image: myregistry.azurecr.io/security/scanner:latest
endpoint: ACR-ServiceConnection
jobs:
- job: Build
container: build-env
steps:
- script: |
echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc
npm ci
npm run build
displayName: 'Authenticated npm install and build'
- job: SecurityScan
dependsOn: Build
container: scan-env
steps:
- script: |
/usr/local/bin/trivy fs --severity HIGH,CRITICAL .
displayName: 'Run security scan'
The endpoint property maps to a Docker Registry service connection configured in Project Settings > Service Connections. For ACR, you can also use an Azure Resource Manager service connection.
Sidecar Containers
Sometimes your tests need a database, a message queue, or another service running alongside your code. Sidecar containers solve this without requiring you to spin up external infrastructure:
resources:
containers:
- container: node-app
image: node:20.11-bookworm
- container: postgres
image: postgres:16.1
env:
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass123
POSTGRES_DB: testdb
ports:
- 5432:5432
- container: redis
image: redis:7.2-alpine
ports:
- 6379:6379
jobs:
- job: IntegrationTests
container: node-app
services:
postgres: postgres
redis: redis
steps:
- script: |
echo "Waiting for PostgreSQL to be ready..."
for i in $(seq 1 30); do
pg_isready -h localhost -p 5432 -U testuser && break
sleep 1
done
echo "PostgreSQL is ready"
displayName: 'Wait for services'
- script: |
npm ci
DATABASE_URL="postgresql://testuser:testpass123@localhost:5432/testdb" \
REDIS_URL="redis://localhost:6379" \
npm run test:integration
displayName: 'Run integration tests'
The services property on the job maps your container aliases to sidecar instances. They start before your job's steps execute and share the same network, so your code can reach them via localhost. This is functionally similar to Docker Compose but managed entirely by the pipeline agent.
Container Resource Options
Container resources accept several properties beyond image and endpoint:
resources:
containers:
- container: custom-build
image: myregistry.azurecr.io/builders/dotnet-node:latest
endpoint: ACR-Connection
env:
BUILD_CONFIG: Release
DOTNET_CLI_TELEMETRY_OPTOUT: 1
ports:
- 8080:8080
- 8443:8443
volumes:
- /opt/build-cache:/cache
options: --memory 4g --cpus 2
mapDockerSocket: false
The options field passes raw Docker run flags to the container runtime. Use it sparingly -- it creates a tight coupling to the container runtime and can break on different agent types. The mapDockerSocket field controls whether the Docker socket is mounted into the container, which you need if your build itself creates Docker images (Docker-in-Docker).
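As a sketch of the Docker-socket scenario -- assuming a hypothetical builder image that includes the Docker CLI and an agent host that runs Docker -- mapping the socket lets steps inside the container drive the host daemon to build images:
resources:
  containers:
    - container: image-builder
      image: myregistry.azurecr.io/builders/docker-cli:latest   # hypothetical image with the docker CLI installed
      endpoint: ACR-Connection
      mapDockerSocket: true    # mount the host's /var/run/docker.sock into the container

jobs:
  - job: BuildImage
    pool:
      vmImage: 'ubuntu-latest'
    container: image-builder
    steps:
      - script: |
          # The docker CLI inside the container talks to the host daemon through the mapped socket
          docker build -t myregistry.azurecr.io/apps/example-service:$(Build.BuildId) .
        displayName: 'Build application image'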
Package Resources
Package resources let you trigger a pipeline when a new version of a package appears in a feed. This is the mechanism for building continuous deployment flows where publishing a library automatically kicks off downstream builds.
NuGet Package Resources
resources:
packages:
- package: shared-models
type: NuGet
connection: AzureArtifacts-NuGet
name: MyOrg.SharedModels
version: '3.*'
trigger: true
trigger: none # Only trigger from package updates
jobs:
- job: UpdateDependency
pool:
vmImage: 'ubuntu-latest'
steps:
- script: |
echo "Triggered by package version: $(resources.package.shared-models.version)"
echo "Package name: MyOrg.SharedModels"
displayName: 'Show trigger info'
- task: NuGetCommand@2
inputs:
command: restore
restoreSolution: '**/*.sln'
feedsToUse: 'select'
vstsFeed: 'my-org-feed'
npm Package Resources
resources:
packages:
- package: ui-components
type: npm
connection: AzureArtifacts-npm
name: '@myorg/ui-components'
version: '*'
trigger: true
jobs:
- job: RebuildApp
pool:
vmImage: 'ubuntu-latest'
steps:
- script: |
echo "New version of @myorg/ui-components detected"
echo "Version: $(resources.package.ui-components.version)"
displayName: 'Log package update'
- script: |
npm ci
npm run build
npm test
displayName: 'Rebuild with updated package'
Python Package Resources
resources:
packages:
- package: ml-utils
type: PyPI
connection: AzureArtifacts-PyPI
name: myorg-ml-utils
version: '>=2.0.0'
trigger: true
Package triggers work by polling the feed. There is a delay -- typically 5 to 15 minutes -- between publishing a new version and the trigger firing. This is not instant. If you need faster propagation, consider using a pipeline resource trigger from the publishing pipeline instead.
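If the library is published by another Azure Pipeline, the faster path is a pipeline resource trigger on that publishing pipeline. A minimal sketch, assuming the publishing pipeline is named MLUtils-CI (hypothetical):
resources:
  pipelines:
    - pipeline: ml-utils-ci
      source: 'MLUtils-CI'      # the pipeline that builds and publishes myorg-ml-utils
      trigger:
        branches:
          include:
            - main

trigger: none   # rebuild only when the publishing pipeline completes

steps:
  - script: |
      echo "Triggered by run $(resources.pipeline.ml-utils-ci.runID)"
      echo "Commit: $(resources.pipeline.ml-utils-ci.sourceCommit)"
      # reinstall dependencies here; the new version is already in the feed
    displayName: 'Rebuild against the freshly published library'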
Pipeline Resources
Pipeline resources create explicit dependencies between pipelines. When pipeline A declares pipeline B as a resource, it can consume B's artifacts and optionally trigger when B completes.
Basic Pipeline Resource
resources:
pipelines:
- pipeline: build-pipeline
source: 'MyProject-CI'
project: MyProject
trigger: none
jobs:
- job: Deploy
pool:
vmImage: 'ubuntu-latest'
steps:
- download: build-pipeline
artifact: drop
displayName: 'Download build artifacts'
- script: |
echo "Deploying build $(resources.pipeline.build-pipeline.runID)"
echo "Source branch: $(resources.pipeline.build-pipeline.sourceBranch)"
echo "Source commit: $(resources.pipeline.build-pipeline.sourceCommit)"
ls $(Pipeline.Workspace)/build-pipeline/drop/
displayName: 'Deploy application'
The source field must match the pipeline's definition name as it appears in Azure DevOps -- the name shown under Pipelines in the UI, not the YAML name: property (which sets the run number format). This is one of the most common mistakes: people put the repo name or file name here instead of the pipeline name.
Pipeline Triggers with Filters
You can filter which completions of the upstream pipeline trigger a new run:
resources:
pipelines:
- pipeline: build-pipeline
source: 'MyProject-CI'
trigger:
branches:
include:
- main
- release/*
exclude:
- feature/*
stages:
- Build
- Test
tags:
- production-ready
The stages filter is particularly useful. It means "only trigger when these stages in the upstream pipeline complete successfully." If the upstream has Build, Test, and Scan stages but you only care about Build and Test, you can filter on those. The tags filter requires the upstream pipeline run to have been tagged with matching tags -- this is a manual or API-driven gating mechanism.
Artifact Passing Between Pipelines
When you download artifacts from a pipeline resource, they land in $(Pipeline.Workspace)/<pipeline-alias>/<artifact-name>:
resources:
pipelines:
- pipeline: api-build
source: 'API-CI'
trigger:
branches:
include:
- main
- pipeline: web-build
source: 'WebApp-CI'
trigger:
branches:
include:
- main
jobs:
- job: DeployAll
pool:
vmImage: 'ubuntu-latest'
steps:
- download: api-build
artifact: api-package
- download: web-build
artifact: web-dist
- script: |
echo "API build artifacts:"
ls -la $(Pipeline.Workspace)/api-build/api-package/
echo ""
echo "Web build artifacts:"
ls -la $(Pipeline.Workspace)/web-build/web-dist/
displayName: 'List all artifacts'
- script: |
echo "Deploying API version $(resources.pipeline.api-build.runID)"
echo "Deploying Web version $(resources.pipeline.web-build.runID)"
# Deploy both components
./deploy.sh \
--api-path "$(Pipeline.Workspace)/api-build/api-package" \
--web-path "$(Pipeline.Workspace)/web-build/web-dist"
displayName: 'Deploy all components'
This pattern is how you build fan-in deployment pipelines. Multiple CI pipelines publish their artifacts independently, and a single deployment pipeline consumes them all. The deployment pipeline triggers whenever either upstream completes; for the resource that did not trigger the run, the download step pulls artifacts from its latest successful run by default.
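One way to guard the fan-in is to check resource metadata before deploying. A minimal sketch, reusing the aliases above, that deploys only when both upstream runs came from main:
steps:
  - download: api-build
    artifact: api-package
  - download: web-build
    artifact: web-dist
  - script: |
      # Both upstream runs must come from main before we deploy
      if [ "$(resources.pipeline.api-build.sourceBranch)" != "refs/heads/main" ] || \
         [ "$(resources.pipeline.web-build.sourceBranch)" != "refs/heads/main" ]; then
        echo "An upstream artifact is not from main -- skipping deployment"
        exit 0
      fi
      ./deploy.sh \
        --api-path "$(Pipeline.Workspace)/api-build/api-package" \
        --web-path "$(Pipeline.Workspace)/web-build/web-dist"
    displayName: 'Deploy only main-branch artifacts'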
Resource Triggers and Filters
Resource triggers are the glue that turns a collection of independent pipelines into an orchestrated system. Each resource type supports slightly different trigger configurations.
Repository Triggers
By default, a change to a repository resource does not trigger the pipeline. You need to explicitly enable triggers:
resources:
repositories:
- repository: config-repo
type: git
name: MyProject/app-config
ref: refs/heads/main
trigger:
branches:
include:
- main
- release/*
paths:
include:
- config/production/*
exclude:
- config/development/*
When a commit is pushed to the config-repo matching these filters, the pipeline triggers. This is how you build "config change" pipelines that automatically redeploy when configuration files are updated in a separate repository.
Combining Multiple Triggers
A pipeline can trigger from its own repo changes, repository resource changes, pipeline resource completions, and package updates -- all in the same YAML file:
trigger:
branches:
include:
- main
paths:
include:
- src/**
resources:
repositories:
- repository: shared-config
type: git
name: MyProject/shared-config
ref: refs/heads/main
trigger:
branches:
include:
- main
pipelines:
- pipeline: library-build
source: 'SharedLibrary-CI'
trigger:
branches:
include:
- main
packages:
- package: core-sdk
type: npm
connection: ArtifactsFeed
name: '@myorg/core-sdk'
trigger: true
jobs:
- job: Build
pool:
vmImage: 'ubuntu-latest'
steps:
- script: |
echo "Build reason: $(Build.Reason)"
echo "This tells you WHY the pipeline was triggered"
# ResourceTrigger = triggered by a resource
# IndividualCI = triggered by a push to self repo
# Manual = manually queued
displayName: 'Identify trigger source'
- script: |
npm ci
npm run build
npm test
displayName: 'Build and test'
The $(Build.Reason) variable is critical for understanding which trigger fired. When a pipeline resource triggers the run, Build.Reason is ResourceTrigger. You can use this in conditions to vary behavior depending on the trigger source.
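For example, individual steps (or whole jobs) can be gated on the trigger source -- a small sketch in which the npm script names are hypothetical:
steps:
  - script: npm run test:all
    displayName: 'Full test suite (code push)'
    condition: eq(variables['Build.Reason'], 'IndividualCI')
  - script: npm run test:smoke
    displayName: 'Smoke tests (resource trigger)'
    condition: eq(variables['Build.Reason'], 'ResourceTrigger')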
Complete Working Example
Here is a realistic multi-stage pipeline that ties together repository resources for shared templates, container resources for build environments, package resources for deployment triggers, and pipeline resources for artifact consumption. This represents a Node.js API service that is part of a larger microservices architecture.
# azure-pipelines.yml -- Order Service deployment pipeline
trigger:
branches:
include:
- main
paths:
include:
- src/**
- package.json
- package-lock.json
resources:
repositories:
# Shared CI/CD templates maintained by the platform team
- repository: platform-templates
type: git
name: Platform/cicd-templates
ref: refs/tags/v4.2.0
# Environment-specific configuration files
- repository: env-config
type: git
name: MyProject/environment-config
ref: refs/heads/main
trigger:
branches:
include:
- main
paths:
include:
- services/order-service/**
containers:
# Custom build image with Node.js, build tools, and security scanners
- container: build-env
image: myregistry.azurecr.io/builders/node20-full:2024.01
endpoint: ACR-Production
# Lightweight image for running tests
- container: test-env
image: node:20.11-slim
# PostgreSQL for integration tests
- container: postgres-test
image: postgres:16.1-alpine
env:
POSTGRES_USER: ordertest
POSTGRES_PASSWORD: test_only_password
POSTGRES_DB: orders_test
ports:
- 5432:5432
# Redis for integration tests
- container: redis-test
image: redis:7.2-alpine
ports:
- 6379:6379
pipelines:
# The shared library this service depends on
- pipeline: shared-lib
source: 'SharedLibrary-CI'
trigger:
branches:
include:
- main
stages:
- Build
- Test
packages:
# Trigger rebuild when the internal SDK is updated
- package: order-sdk
type: npm
connection: AzureArtifacts-npm
name: '@myorg/order-sdk'
version: '>=2.0.0'
trigger: true
variables:
- group: OrderService-Variables
- name: nodeVersion
value: '20'
- name: buildConfiguration
value: 'production'
stages:
# Stage 1: Build inside a custom container
- stage: Build
displayName: 'Build & Package'
jobs:
- job: BuildApp
container: build-env
pool:
vmImage: 'ubuntu-latest'
steps:
- checkout: self
path: s/order-service
- checkout: env-config
path: s/config
fetchDepth: 1
- script: |
cd $(Build.SourcesDirectory)/order-service
echo "Node version: $(node --version)"
echo "npm version: $(npm --version)"
echo "Build reason: $(Build.Reason)"
npm ci --production=false
npm run build
npm run lint
echo "Build completed at $(date)"
du -sh dist/
displayName: 'Install, build, and lint'
- script: |
cd $(Build.SourcesDirectory)/order-service
mkdir -p $(Build.ArtifactStagingDirectory)/app
cp -r dist/ $(Build.ArtifactStagingDirectory)/app/
cp package.json $(Build.ArtifactStagingDirectory)/app/
cp package-lock.json $(Build.ArtifactStagingDirectory)/app/
mkdir -p $(Build.ArtifactStagingDirectory)/config
cp -r $(Build.SourcesDirectory)/config/services/order-service/ \
$(Build.ArtifactStagingDirectory)/config/
displayName: 'Stage artifacts'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)'
artifactName: 'order-service-$(Build.BuildId)'
displayName: 'Publish build artifact'
# Stage 2: Run tests inside containers with sidecar services
- stage: Test
displayName: 'Test'
dependsOn: Build
jobs:
- job: UnitTests
container: test-env
pool:
vmImage: 'ubuntu-latest'
steps:
- script: |
npm ci
npm run test:unit -- --reporter mocha-junit-reporter \
--reporter-options mochaFile=test-results/unit.xml
displayName: 'Run unit tests'
- task: PublishTestResults@2
inputs:
testResultsFiles: 'test-results/unit.xml'
testRunTitle: 'Unit Tests'
condition: always()
- job: IntegrationTests
container: test-env
pool:
vmImage: 'ubuntu-latest'
services:
postgres: postgres-test
redis: redis-test
steps:
- script: |
echo "Waiting for PostgreSQL..."
for i in $(seq 1 30); do
nc -z localhost 5432 && break
sleep 1
done
echo "Waiting for Redis..."
for i in $(seq 1 15); do
nc -z localhost 6379 && break
sleep 1
done
echo "All services ready"
displayName: 'Wait for sidecar services'
- script: |
npm ci
DATABASE_URL="postgresql://ordertest:test_only_password@localhost:5432/orders_test" \
REDIS_URL="redis://localhost:6379" \
npm run test:integration -- --reporter mocha-junit-reporter \
--reporter-options mochaFile=test-results/integration.xml
displayName: 'Run integration tests'
- task: PublishTestResults@2
inputs:
testResultsFiles: 'test-results/integration.xml'
testRunTitle: 'Integration Tests'
condition: always()
# Security scan using a shared template
- template: jobs/security-scan.yml@platform-templates
parameters:
scanTarget: '$(Build.SourcesDirectory)'
failOnHighSeverity: true
# Stage 3: Deploy to staging
- stage: DeployStaging
displayName: 'Deploy to Staging'
dependsOn: Test
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- template: jobs/deploy-app-service.yml@platform-templates
parameters:
environment: 'staging'
azureSubscription: 'Staging-Azure'
appName: 'order-service-staging'
artifactName: 'order-service-$(Build.BuildId)'
# Stage 4: Deploy to production with approval gate
- stage: DeployProduction
displayName: 'Deploy to Production'
dependsOn: DeployStaging
condition: succeeded()
jobs:
- template: jobs/deploy-app-service.yml@platform-templates
parameters:
environment: 'production'
azureSubscription: 'Production-Azure'
appName: 'order-service-prod'
artifactName: 'order-service-$(Build.BuildId)'
This pipeline demonstrates several key patterns working together:
- The platform-templates repo, pinned to v4.2.0, provides reusable job templates for security scanning and deployment, ensuring consistency across all services.
- The env-config repo with a resource trigger means pushing a config change for this service automatically redeploys it.
- Container resources give exact control over the build environment (build-env) and test runtime (test-env), with PostgreSQL and Redis sidecars for integration tests.
- The pipeline resource from SharedLibrary-CI means a new library release triggers this service to rebuild.
- The package resource on @myorg/order-sdk means a new SDK version also triggers a rebuild.
The combination ensures that this service gets rebuilt and redeployed whenever its own code changes, its configuration changes, its shared library is updated, or its SDK dependency publishes a new version.
Consuming Artifacts from Pipeline Resources
When you use a download step to pull artifacts from a pipeline resource, there are several variables available that let you trace exactly what you deployed:
resources:
pipelines:
- pipeline: upstream
source: 'Upstream-CI'
steps:
- download: upstream
artifact: drop
- script: |
echo "Pipeline run ID: $(resources.pipeline.upstream.runID)"
echo "Pipeline name: $(resources.pipeline.upstream.pipelineName)"
echo "Source branch: $(resources.pipeline.upstream.sourceBranch)"
echo "Source commit: $(resources.pipeline.upstream.sourceCommit)"
echo "Run URL: $(resources.pipeline.upstream.runURI)"
echo ""
echo "Artifact contents:"
find $(Pipeline.Workspace)/upstream/drop -type f | head -20
displayName: 'Trace upstream build'
These variables are essential for audit trails. When someone asks "what version is deployed in production?", you can trace back through the pipeline resource metadata to the exact commit that produced the artifacts.
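One simple way to keep that trail with the deployment itself is to write the metadata into a manifest file that travels with the artifacts -- a minimal sketch, with an arbitrary file name and fields:
steps:
  - download: upstream
    artifact: drop
  - script: |
      cat > $(Build.ArtifactStagingDirectory)/deployment-manifest.json <<'EOF'
      {
        "upstreamRunId": "$(resources.pipeline.upstream.runID)",
        "sourceBranch": "$(resources.pipeline.upstream.sourceBranch)",
        "sourceCommit": "$(resources.pipeline.upstream.sourceCommit)",
        "deployedBy": "$(Build.DefinitionName) run $(Build.BuildId)"
      }
      EOF
      cat $(Build.ArtifactStagingDirectory)/deployment-manifest.json
    displayName: 'Write deployment manifest for auditing'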
Writing a Node.js Script That Reads Resource Metadata
If your deployment logic is in Node.js, you can access pipeline resource metadata through environment variables. Azure DevOps exposes resources.pipeline.* values as environment variables with the names uppercased and dots replaced by underscores:
// scripts/deploy-info.js
var pipelineRunId = process.env.RESOURCES_PIPELINE_UPSTREAM_RUNID || 'unknown';
var sourceBranch = process.env.RESOURCES_PIPELINE_UPSTREAM_SOURCEBRANCH || 'unknown';
var sourceCommit = process.env.RESOURCES_PIPELINE_UPSTREAM_SOURCECOMMIT || 'unknown';
var buildReason = process.env.BUILD_REASON || 'unknown';
function getDeploymentInfo() {
return {
pipelineRunId: pipelineRunId,
sourceBranch: sourceBranch,
sourceCommit: sourceCommit,
buildReason: buildReason,
deployedAt: new Date().toISOString()
};
}
function logDeployment() {
var info = getDeploymentInfo();
console.log('Deployment Info:');
console.log(' Pipeline Run: ' + info.pipelineRunId);
console.log(' Branch: ' + info.sourceBranch);
console.log(' Commit: ' + info.sourceCommit);
console.log(' Reason: ' + info.buildReason);
console.log(' Deployed: ' + info.deployedAt);
return info;
}
module.exports = {
getDeploymentInfo: getDeploymentInfo,
logDeployment: logDeployment
};
// Run directly if called from the pipeline
if (require.main === module) {
logDeployment();
}
# Usage in pipeline step
node scripts/deploy-info.js
# Output:
# Deployment Info:
# Pipeline Run: 4521
# Branch: refs/heads/main
# Commit: a3f7b2c9e1d4f6a8b0c2e4f6a8b0d2e4f6a8b0c2
# Reason: ResourceTrigger
# Deployed: 2026-02-08T14:32:17.000Z
Common Issues and Troubleshooting
1. Repository Resource Not Found
##[error] The repository MyProject/shared-templates could not be found.
Verify the name and project are correct and that the service account
has read access to the repository.
This happens when the name field in a repository resource does not match the exact project and repo name, or when the pipeline's build service account lacks read access. The fix:
- Verify the exact project and repo name in Azure DevOps (case-sensitive).
- Go to Project Settings > Repositories > the target repo > Security.
- Add the build service account ([Project Name] Build Service) with Read permission.
- For cross-project references, check the job authorization scope settings under Project Settings > Pipelines > Settings (a "Limit job authorization scope to current project" setting can block access), or grant the build service account explicit access in the target project.
2. Container Image Pull Failure
##[error] Unable to pull image 'myregistry.azurecr.io/builders/node20:latest'.
Error: unauthorized: authentication required, visit https://aka.ms/acr/authorization
This error occurs when the service connection referenced in the endpoint field is misconfigured, expired, or does not have pull access to the registry. Troubleshooting steps:
- Verify the service connection exists in Project Settings > Service Connections.
- Click "Verify" on the Docker Registry service connection.
- Ensure the service principal has the AcrPull role on the Azure Container Registry.
- Check that the image tag actually exists -- latest might not be what you think it is.
- If using a rate-limited Docker Hub image, consider mirroring it to your own ACR.
3. Pipeline Resource Trigger Not Firing
# No error message -- the pipeline simply never triggers
This is the most frustrating issue because there is no error to diagnose. The pipeline resource trigger silently fails when:
- The source field does not match the pipeline name (not the YAML file name, not the repo name -- the actual name shown in Pipelines > All pipelines).
- The upstream pipeline is in a different project and cross-project triggers are not configured.
- The branch filter excludes the branch the upstream ran on.
- The upstream pipeline completed with a partial success or cancellation (triggers require full success by default).
To debug, check the upstream pipeline's run detail page. Under "Consumed by," it should list any downstream pipelines it triggered. If nothing is listed, the trigger configuration is wrong.
4. Template Reference Fails with "Could not find template"
##[error] /jobs/node-build.yml@templates (Line: 1, Col: 1):
Could not find template 'jobs/node-build.yml' in repository 'templates'.
This means the file path does not match what exists in the referenced repository at the pinned ref. Common causes:
- The template file was renamed or moved in the templates repo, but you are pinned to an older tag that does not have the new path.
- The path is case-sensitive: Jobs/node-build.yml and jobs/node-build.yml are different on Linux agents.
- The ref points to a branch or tag where the template does not exist yet.
- The file exists but has a YAML syntax error, which sometimes produces this confusing error instead of a parse error.
Fix: check out the templates repo at the exact ref you specified and verify the file exists at that path.
5. Package Resource Version Mismatch
##[warning] Package 'my-package' version '3.1.0' does not match version
constraint '>=4.0.0'. The pipeline will not be triggered.
This warning appears in the pipeline run logs when a package update does not match your version filter. If you expect all updates to trigger the pipeline, use version: '*' instead of a specific range. Be aware that pre-release versions (like 4.0.0-beta.1) may or may not match depending on the package type's semver interpretation.
6. Multi-Checkout Working Directory Confusion
# You expect to be in your app directory, but you are in $(Build.SourcesDirectory)
$ ls
order-service/
shared-config/
# Your npm install fails because there is no package.json here
When you use multiple checkout steps, the default working directory for subsequent script steps is $(Build.SourcesDirectory), which now contains subdirectories for each repo. You must either set workingDirectory on each step or cd into the correct directory:
steps:
- checkout: self
path: s/app
- checkout: config-repo
path: s/config
# Option 1: Set workingDirectory
- script: npm ci
workingDirectory: '$(Build.SourcesDirectory)/app'
# Option 2: cd in the script
- script: |
cd $(Build.SourcesDirectory)/app
npm ci
Best Practices
- Pin template repositories to tags, not branches. Using ref: refs/tags/v4.2.0 means your pipeline only changes when you deliberately bump the version. Pointing at refs/heads/main means any push to the templates repo could break every pipeline in your organization simultaneously. This is the single most important best practice for multi-repo pipelines.
- Use meaningful resource aliases. Name your resources after what they represent, not what they are. build-pipeline is better than pipeline1, postgres-test is better than db, platform-templates is better than repo2. You will reference these aliases dozens of times throughout the pipeline, and future you will thank present you for clear names.
- Limit container resource scope to what the job needs. Do not pull a 2 GB "kitchen sink" image when your job only needs Node.js and npm. Larger images mean longer pull times (30-90 seconds for a multi-GB image vs. 5-10 seconds for an alpine-based image). Build purpose-specific images and keep them lean.
- Set trigger: none on the main trigger when using resource triggers. If your pipeline is exclusively triggered by pipeline resources or package updates, explicitly set trigger: none at the top level. Otherwise, pushes to the self repo will also trigger the pipeline, causing duplicate runs and confusion about what triggered what.
- Use $(Build.Reason) to vary behavior by trigger source. A pipeline triggered by a code change might run the full test suite, while one triggered by a config change might only run smoke tests. Use conditions like condition: eq(variables['Build.Reason'], 'ResourceTrigger') to implement this.
- Mirror external container images to your own registry. Docker Hub rate limits, GitHub Container Registry outages, and third-party registry downtime will break your pipelines at the worst possible time. Mirror the images you depend on to your own ACR or private registry. Pull times will also be faster if the registry is in the same region as your agents.
- Document cross-pipeline dependencies explicitly. When pipeline A triggers pipeline B which triggers pipeline C, draw that graph somewhere. Add comments in the YAML. Maintain a diagram. Six months from now, nobody will remember why a config change in repo X triggers a deployment of service Y unless it is documented.
- Test resource trigger configurations in a non-production pipeline first. Create a sandbox pipeline that references the same resources with the same triggers. Verify the triggers fire as expected before rolling the configuration into your production pipelines. Silent trigger failures are the most time-consuming issues to debug.
- Use fetchDepth: 1 for repository resources from which you only need files. If you are pulling a config repo or a templates repo and do not need git history, shallow clone it. This saves time and disk space, particularly on self-hosted agents with limited storage.
- Prefer pipeline resources over package resources for internal dependency chains. Pipeline triggers fire within seconds of the upstream completing. Package triggers rely on feed polling and can take 5-15 minutes. If your shared library's CI pipeline publishes a package, trigger downstream pipelines from the pipeline completion, not from the package appearing in the feed.
