---
title: Continuous integration and delivery in Azure Data Factory
description: Learn how to use continuous integration and delivery to move Data Factory pipelines from one environment (development, test, production) to another.
services: data-factory
ms.service: data-factory
ms.workload: data-services
author: dcstwh
ms.author: weetok
ms.reviewer: maghan
manager: jroth
ms.topic: conceptual
ms.date: 12/17/2020
---
[!INCLUDEappliesto-adf-xxx-md]
Continuous integration is the practice of testing each change made to your codebase automatically and as early as possible. Continuous delivery follows the testing that happens during continuous integration and pushes changes to a staging or production system.
In Azure Data Factory, continuous integration and delivery (CI/CD) means moving Data Factory pipelines from one environment (development, test, production) to another. Azure Data Factory utilizes Azure Resource Manager templates to store the configuration of your various ADF entities (pipelines, datasets, data flows, and so on). There are two suggested methods to promote a data factory to another environment:
- Automated deployment using Data Factory's integration with Azure Pipelines
- Manually upload a Resource Manager template using Data Factory UX integration with Azure Resource Manager.
[!INCLUDE updated-for-az]
Below is a sample overview of the CI/CD lifecycle in an Azure data factory that's configured with Azure Repos Git. For more information on how to configure a Git repository, see Source control in Azure Data Factory.
- A development data factory is created and configured with Azure Repos Git. All developers should have permission to author Data Factory resources like pipelines and datasets.
- A developer creates a feature branch to make a change. They debug their pipeline runs with their most recent changes. For more information on how to debug a pipeline run, see Iterative development and debugging with Azure Data Factory.
- After a developer is satisfied with their changes, they create a pull request from their feature branch to the main or collaboration branch to get their changes reviewed by peers.
- After a pull request is approved and changes are merged in the main branch, the changes get published to the development factory.
- When the team is ready to deploy the changes to a test or UAT (User Acceptance Testing) factory, the team goes to their Azure Pipelines release and deploys the desired version of the development factory to UAT. This deployment takes place as part of an Azure Pipelines task and uses Resource Manager template parameters to apply the appropriate configuration.
- After the changes have been verified in the test factory, deploy to the production factory by using the next task of the pipelines release.
> [!NOTE]
> Only the development factory is associated with a git repository. The test and production factories shouldn't have a git repository associated with them and should only be updated via an Azure DevOps pipeline or via a Resource Manager template.
The following image highlights the different steps of this lifecycle.
The following is a guide for setting up an Azure Pipelines release that automates the deployment of a data factory to multiple environments.
- An Azure subscription linked to Visual Studio Team Foundation Server or Azure Repos that uses the Azure Resource Manager service endpoint.
- A data factory configured with Azure Repos Git integration.
- An Azure key vault that contains the secrets for each environment.
1. In Azure DevOps, open the project that's configured with your data factory.

2. On the left side of the page, select Pipelines, and then select Releases.

3. Select New pipeline, or, if you have existing pipelines, select New and then New release pipeline.

4. Select the Empty job template.

5. In the Stage name box, enter the name of your environment.

6. Select Add artifact, and then select the git repository configured with your development data factory. Select the publish branch of the repository for the Default branch. By default, this publish branch is `adf_publish`. For the Default version, select Latest from default branch.

7. Add an Azure Resource Manager Deployment task:

   a. In the stage view, select View stage tasks.

   b. Create a new task. Search for ARM Template Deployment, and then select Add.

   c. In the Deployment task, select the subscription, resource group, and location for the target data factory. Provide credentials if necessary.

   d. In the Action list, select Create or update resource group.

   e. Select the ellipsis button (…) next to the Template box. Browse for the Azure Resource Manager template that is generated in your publish branch of the configured git repository. Look for the file `ARMTemplateForFactory.json` in the folder of the adf_publish branch.

   f. Select … next to the Template parameters box to choose the parameters file. Look for the file `ARMTemplateParametersForFactory.json` in the folder of the adf_publish branch.

   g. Select … next to the Override template parameters box, and enter the desired parameter values for the target data factory (see the example after this procedure). For credentials that come from Azure Key Vault, enter the secret's name between double quotation marks. For example, if the secret's name is cred1, enter "$(cred1)" for this value.

   h. Select Incremental for the Deployment mode.

   > [!WARNING]
   > In Complete deployment mode, resources that exist in the resource group but aren't specified in the new Resource Manager template will be deleted. For more information, please refer to Azure Resource Manager Deployment Modes.

8. Save the release pipeline.

9. To trigger a release, select Create release. To automate the creation of releases, see Azure DevOps release triggers.
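For illustration only, the value for the Override template parameters box is a space-separated list of `-parameterName value` pairs. The parameter names below (`factoryName`, `AzureSqlDatabase_connectionString`, `AzureKeyVault_properties_typeProperties_baseUrl`) are hypothetical; use the names defined in your factory's generated parameters file.

```
-factoryName "MyTestDataFactory" -AzureSqlDatabase_connectionString "$(AzureSqlDatabase_connectionString)" -AzureKeyVault_properties_typeProperties_baseUrl "https://mytest-keyvault.vault.azure.net/"
```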
> [!IMPORTANT]
> In CI/CD scenarios, the integration runtime (IR) type in different environments must be the same. For example, if you have a self-hosted IR in the development environment, the same IR must also be of type self-hosted in other environments, such as test and production. Similarly, if you're sharing integration runtimes across multiple stages, you have to configure the integration runtimes as linked self-hosted in all environments, such as development, test, and production.
If you have secrets to pass in an Azure Resource Manager template, we recommend that you use Azure Key Vault with the Azure Pipelines release.
There are two ways to handle secrets:
- Add the secrets to the parameters file. For more info, see Use Azure Key Vault to pass secure parameter value during deployment.

  Create a copy of the parameters file that's uploaded to the publish branch. Set the values of the parameters that you want to get from Key Vault by using this format:

  ```json
  {
      "parameters": {
          "azureSqlReportingDbPassword": {
              "reference": {
                  "keyVault": {
                      "id": "/subscriptions/<subId>/resourceGroups/<resourcegroupId>/providers/Microsoft.KeyVault/vaults/<vault-name>"
                  },
                  "secretName": "<secret-name>"
              }
          }
      }
  }
  ```

  When you use this method, the secret is pulled from the key vault automatically.

  The parameters file needs to be in the publish branch as well.
- Add an Azure Key Vault task before the Azure Resource Manager Deployment task described in the previous section:

  - On the Tasks tab, create a new task. Search for Azure Key Vault and add it.

  - In the Key Vault task, select the subscription in which you created the key vault. Provide credentials if necessary, and then select the key vault.

  - The Azure Key Vault task might fail with an Access Denied error if the correct permissions aren't set. Download the logs for the release, and locate the .ps1 file that contains the command to give permissions to the Azure Pipelines agent. You can run the command directly. Or you can copy the principal ID from the file and add the access policy manually in the Azure portal. `Get` and `List` are the minimum permissions required.
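If you prefer to grant the permissions yourself, a minimal PowerShell sketch (assuming an access-policy-based key vault; the vault name and principal object ID are placeholders taken from the release logs) looks like this:

```powershell
# Grant the Azure Pipelines service connection the minimum secret permissions (Get, List).
Set-AzKeyVaultAccessPolicy `
    -VaultName "<vault-name>" `
    -ObjectId "<principal-object-id>" `
    -PermissionsToSecrets Get,List
```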
Deployment can fail if you try to update active triggers. To update active triggers, you need to manually stop them and then restart them after the deployment. You can do this by using an Azure PowerShell task:
- On the Tasks tab of the release, add an Azure PowerShell task. Choose task version 4.*.

- Select the subscription your factory is in.

- Select Script File Path as the script type. This requires you to save your PowerShell script in your repository. The following PowerShell script can be used to stop triggers:

  ```powershell
  $triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName

  $triggersADF | ForEach-Object { Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.name -Force }
  ```

You can complete similar steps (with the `Start-AzDataFactoryV2Trigger` function) to restart the triggers after deployment.
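For example, a matching post-deployment snippet to restart the triggers (reusing the same `$DataFactoryName` and `$ResourceGroupName` script variables) might look like this:

```powershell
# Restart the triggers once the Resource Manager deployment has completed.
$triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName

$triggersADF | ForEach-Object { Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.name -Force }
```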
The data factory team has provided a sample pre- and post-deployment script located at the bottom of this article.
1. In the ARM Template list, select Export ARM Template to export the Resource Manager template for your data factory in the development environment.

2. In your test and production data factories, select Import ARM Template. This action takes you to the Azure portal, where you can import the exported template. Select Build your own template in the editor to open the Resource Manager template editor.

3. Select Load file, and then select the generated Resource Manager template. This is the arm_template.json file located in the .zip file exported in step 1.

4. In the settings section, enter the configuration values, like linked service credentials. When you're done, select Purchase to deploy the Resource Manager template.
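If you prefer the command line over the portal, the exported template can also be deployed with PowerShell. This is only a sketch: the parameters file name and resource group below are placeholders based on the contents of the exported .zip file.

```powershell
# Deploy the exported template and its parameters file to the target factory's resource group.
New-AzResourceGroupDeployment `
    -ResourceGroupName "<target-resource-group>" `
    -TemplateFile ".\arm_template.json" `
    -TemplateParameterFile ".\arm_template_parameters.json" `
    -Mode Incremental
```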
If your development factory has an associated git repository, you can override the default Resource Manager template parameters of the Resource Manager template generated by publishing or exporting the template. You might want to override the default parameterization template in these scenarios:
- You use automated CI/CD and you want to change some properties during Resource Manager deployment, but the properties aren't parameterized by default.
- Your factory is so large that the default Resource Manager template is invalid because it has more than the maximum allowed parameters (256).
To handle the 256-parameter limit, there are three options:

- Use the custom parameters file and remove properties that don't need parameterization, that is, properties that can keep a default value, which decreases the parameter count.
- Refactor logic in the data flow to reduce parameters. For example, if several pipeline parameters all have the same value, you can use global parameters instead.
- Split one data factory into multiple data factories.
To override the default parameterization template, go to the management hub and select Parameterization template in the source control section. Select Edit template to open the parameterization template code editor.
Creating a custom parameterization template creates a file named `arm-template-parameters-definition.json` in the root folder of your git branch. You must use that exact file name.

When publishing from the collaboration branch, Data Factory will read this file and use its configuration to determine which properties get parameterized. If no file is found, the default template is used.
When exporting a Resource Manager template, Data Factory reads this file from whichever branch you're currently working on, not the collaboration branch. You can create or edit the file from a private branch, where you can test your changes by selecting Export ARM Template in the UI. You can then merge the file into the collaboration branch.
> [!NOTE]
> A custom parameterization template doesn't change the ARM template parameter limit of 256. It lets you choose and decrease the number of parameterized properties.
The following are some guidelines to follow when you create the custom parameters file, `arm-template-parameters-definition.json`. The file consists of a section for each entity type: trigger, pipeline, linked service, dataset, integration runtime, and data flow.

- Enter the property path under the relevant entity type.
- Setting a property name to `*` indicates that you want to parameterize all properties under it (only down to the first level, not recursively). You can also provide exceptions to this configuration.
- Setting the value of a property as a string indicates that you want to parameterize the property. Use the format `<action>:<name>:<stype>`.
  - `<action>` can be one of these characters:
    - `=` means keep the current value as the default value for the parameter.
    - `-` means don't keep the default value for the parameter.
    - `|` is a special case for secrets from Azure Key Vault for connection strings or keys.
  - `<name>` is the name of the parameter. If it's blank, it takes the name of the property. If the value starts with a `-` character, the name is shortened. For example, `AzureStorage1_properties_typeProperties_connectionString` would be shortened to `AzureStorage1_connectionString`.
  - `<stype>` is the type of parameter. If `<stype>` is blank, the default type is `string`. Supported values: `string`, `securestring`, `int`, `bool`, `object`, `secureobject`, and `array`.
- Specifying an array in the definition file indicates that the matching property in the template is an array. Data Factory iterates through all the objects in the array by using the definition that's specified in the integration runtime object of the array. The second object, a string, becomes the name of the property, which is used as the name for the parameter for each iteration.
- A definition can't be specific to a resource instance. Any definition applies to all resources of that type.
- By default, all secure strings, such as Key Vault secrets, connection strings, keys, and tokens, are parameterized.
Here's an example of what a parameterization template might look like:
```json
{
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"activities": [{
"typeProperties": {
"waitTimeInSeconds": "-::int",
"headers": "=::object"
}
}]
}
},
"Microsoft.DataFactory/factories/integrationRuntimes": {
"properties": {
"typeProperties": {
"*": "="
}
}
},
"Microsoft.DataFactory/factories/triggers": {
"properties": {
"typeProperties": {
"recurrence": {
"*": "=",
"interval": "=:triggerSuffix:int",
"frequency": "=:-freq"
},
"maxConcurrency": "="
}
}
},
"Microsoft.DataFactory/factories/linkedServices": {
"*": {
"properties": {
"typeProperties": {
"accountName": "=",
"username": "=",
"connectionString": "|:-connectionString:secureString",
"secretAccessKey": "|"
}
}
},
"AzureDataLakeStore": {
"properties": {
"typeProperties": {
"dataLakeStoreUri": "="
}
}
}
},
"Microsoft.DataFactory/factories/datasets": {
"properties": {
"typeProperties": {
"*": "="
}
}
}
}
```
Here's an explanation of how the preceding template is constructed, broken down by resource type.
- Any property in the path `activities/typeProperties/waitTimeInSeconds` is parameterized. Any activity in a pipeline that has a code-level property named `waitTimeInSeconds` (for example, the `Wait` activity) is parameterized as a number, with a default name. But it won't have a default value in the Resource Manager template. It will be a mandatory input during the Resource Manager deployment.
- Similarly, a property called `headers` (for example, in a `Web` activity) is parameterized with type `object` (JObject). It has a default value, which is the same value as that of the source factory.
- All properties under the path `typeProperties` are parameterized with their respective default values. For example, there are two properties under `IntegrationRuntimes` type properties: `computeProperties` and `ssisProperties`. Both property types are created with their respective default values and types (Object).
- Under `typeProperties`, two properties are parameterized. The first one is `maxConcurrency`, which is specified to have a default value and is of type `string`. It has the default parameter name `<entityName>_properties_typeProperties_maxConcurrency`.
- The `recurrence` property also is parameterized. Under it, all properties at that level are specified to be parameterized as strings, with default values and parameter names. An exception is the `interval` property, which is parameterized as type `int`. The parameter name is suffixed with `<entityName>_properties_typeProperties_recurrence_triggerSuffix`. Similarly, the `freq` property is a string and is parameterized as a string. However, the `freq` property is parameterized without a default value. The name is shortened and suffixed. For example, `<entityName>_freq`.
- Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In this example, for all linked services of type `AzureDataLakeStore`, a specific template will be applied. For all others (via `*`), a different template will be applied.
- The `connectionString` property will be parameterized as a `securestring` value. It won't have a default value. It will have a shortened parameter name that's suffixed with `connectionString`.
- The property `secretAccessKey` happens to be an `AzureKeyVaultSecret` (for example, in an Amazon S3 linked service). It's automatically parameterized as an Azure Key Vault secret and fetched from the configured key vault. You can also parameterize the key vault itself.
- Although type-specific customization is available for datasets, you can provide configuration without explicitly having a `*`-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.
Below is the current default parameterization template. If you need to add only a few parameters, editing this template directly might be a good idea because you won't lose the existing parameterization structure.
```json
{
"Microsoft.DataFactory/factories": {
"properties": {
"globalParameters": {
"*": {
"value": "="
}
}
},
"location": "="
},
"Microsoft.DataFactory/factories/pipelines": {
},
"Microsoft.DataFactory/factories/dataflows": {
},
"Microsoft.DataFactory/factories/integrationRuntimes":{
"properties": {
"typeProperties": {
"ssisProperties": {
"catalogInfo": {
"catalogServerEndpoint": "=",
"catalogAdminUserName": "=",
"catalogAdminPassword": {
"value": "-::secureString"
}
},
"customSetupScriptProperties": {
"sasToken": {
"value": "-::secureString"
}
}
},
"linkedInfo": {
"key": {
"value": "-::secureString"
},
"resourceId": "="
},
"computeProperties": {
"dataFlowProperties": {
"externalComputeInfo": [{
"accessToken": "-::secureString"
}
]
}
}
}
}
},
"Microsoft.DataFactory/factories/triggers": {
"properties": {
"pipelines": [{
"parameters": {
"*": "="
}
},
"pipelineReference.referenceName"
],
"pipeline": {
"parameters": {
"*": "="
}
},
"typeProperties": {
"scope": "="
}
}
},
"Microsoft.DataFactory/factories/linkedServices": {
"*": {
"properties": {
"typeProperties": {
"accountName": "=",
"username": "=",
"userName": "=",
"accessKeyId": "=",
"servicePrincipalId": "=",
"userId": "=",
"host": "=",
"clientId": "=",
"clusterUserName": "=",
"clusterSshUserName": "=",
"hostSubscriptionId": "=",
"clusterResourceGroup": "=",
"subscriptionId": "=",
"resourceGroupName": "=",
"tenant": "=",
"dataLakeStoreUri": "=",
"baseUrl": "=",
"database": "=",
"serviceEndpoint": "=",
"batchUri": "=",
"poolName": "=",
"databaseName": "=",
"systemNumber": "=",
"server": "=",
"url":"=",
"functionAppUrl":"=",
"environmentUrl": "=",
"aadResourceId": "=",
"sasUri": "|:-sasUri:secureString",
"sasToken": "|",
"connectionString": "|:-connectionString:secureString",
"hostKeyFingerprint": "="
}
}
},
"Odbc": {
"properties": {
"typeProperties": {
"userName": "=",
"connectionString": {
"secretName": "="
}
}
}
}
},
"Microsoft.DataFactory/factories/datasets": {
"*": {
"properties": {
"typeProperties": {
"folderPath": "=",
"fileName": "="
}
}
}
},
"Microsoft.DataFactory/factories/managedVirtualNetworks/managedPrivateEndpoints": {
"properties": {
"*": "="
}
}
}
```
The following example shows how to add a single value to the default parameterization template. We only want to add an existing Azure Databricks interactive cluster ID for a Databricks linked service to the parameters file. Note that this file is the same as the previous file except for the addition of `existingClusterId` under the properties field of `Microsoft.DataFactory/factories/linkedServices`.
```json
{
"Microsoft.DataFactory/factories": {
"properties": {
"globalParameters": {
"*": {
"value": "="
}
}
},
"location": "="
},
"Microsoft.DataFactory/factories/pipelines": {
},
"Microsoft.DataFactory/factories/dataflows": {
},
"Microsoft.DataFactory/factories/integrationRuntimes":{
"properties": {
"typeProperties": {
"ssisProperties": {
"catalogInfo": {
"catalogServerEndpoint": "=",
"catalogAdminUserName": "=",
"catalogAdminPassword": {
"value": "-::secureString"
}
},
"customSetupScriptProperties": {
"sasToken": {
"value": "-::secureString"
}
}
},
"linkedInfo": {
"key": {
"value": "-::secureString"
},
"resourceId": "="
}
}
}
},
"Microsoft.DataFactory/factories/triggers": {
"properties": {
"pipelines": [{
"parameters": {
"*": "="
}
},
"pipelineReference.referenceName"
],
"pipeline": {
"parameters": {
"*": "="
}
},
"typeProperties": {
"scope": "="
}
}
},
"Microsoft.DataFactory/factories/linkedServices": {
"*": {
"properties": {
"typeProperties": {
"accountName": "=",
"username": "=",
"userName": "=",
"accessKeyId": "=",
"servicePrincipalId": "=",
"userId": "=",
"clientId": "=",
"clusterUserName": "=",
"clusterSshUserName": "=",
"hostSubscriptionId": "=",
"clusterResourceGroup": "=",
"subscriptionId": "=",
"resourceGroupName": "=",
"tenant": "=",
"dataLakeStoreUri": "=",
"baseUrl": "=",
"database": "=",
"serviceEndpoint": "=",
"batchUri": "=",
"poolName": "=",
"databaseName": "=",
"systemNumber": "=",
"server": "=",
"url":"=",
"aadResourceId": "=",
"connectionString": "|:-connectionString:secureString",
"existingClusterId": "-"
}
}
},
"Odbc": {
"properties": {
"typeProperties": {
"userName": "=",
"connectionString": {
"secretName": "="
}
}
}
}
},
"Microsoft.DataFactory/factories/datasets": {
"*": {
"properties": {
"typeProperties": {
"folderPath": "=",
"fileName": "="
}
}
}}
}
```
If you've set up CI/CD for your data factories, you might exceed the Azure Resource Manager template limits as your factory grows bigger. For example, one limit is the maximum number of resources in a Resource Manager template. To accommodate large factories while generating the full Resource Manager template for a factory, Data Factory now generates linked Resource Manager templates. With this feature, the entire factory payload is broken down into several files so that you aren't constrained by the limits.
If you've configured Git, the linked templates are generated and saved alongside the full Resource Manager templates in the adf_publish branch in a new folder called linkedTemplates.
The linked Resource Manager templates usually consist of a master template and a set of child templates that are linked to the master. The parent template is called ArmTemplate_master.json, and child templates are named with the pattern ArmTemplate_0.json, ArmTemplate_1.json, and so on.
To use linked templates instead of the full Resource Manager template, update your CI/CD task to point to ArmTemplate_master.json instead of ArmTemplateForFactory.json (the full Resource Manager template). Resource Manager also requires that you upload the linked templates into a storage account so Azure can access them during deployment. For more info, see Deploying linked Resource Manager templates with VSTS.
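For illustration only, after the linkedTemplates folder has been uploaded to a storage account, a deployment of the master template might look like the following PowerShell sketch. The storage account, container, and SAS token are placeholders, and the generated master template may define additional parameters (for example, the location of the linked templates) that you must supply.

```powershell
# Deploy the linked master template from a staging location that Resource Manager can reach.
New-AzResourceGroupDeployment `
    -ResourceGroupName "<target-resource-group>" `
    -TemplateUri "https://<storage-account>.blob.core.windows.net/<container>/linkedTemplates/ArmTemplate_master.json?<sas-token>" `
    -TemplateParameterFile ".\ARMTemplateParametersForFactory.json" `
    -Mode Incremental
```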
Remember to add the Data Factory scripts in your CI/CD pipeline before and after the deployment task.
If you don't have Git configured, you can access the linked templates via Export ARM Template in the ARM Template list.
If you deploy a factory to production and realize there's a bug that needs to be fixed right away, but you can't deploy the current collaboration branch, you might need to deploy a hotfix. This approach is also known as quick-fix engineering or QFE.
1. In Azure DevOps, go to the release that was deployed to production. Find the last commit that was deployed.

2. From the commit message, get the commit ID of the collaboration branch.

3. Create a new hotfix branch from that commit (see the sketch after these steps).

4. Go to the Azure Data Factory UX and switch to the hotfix branch.

5. By using the Azure Data Factory UX, fix the bug. Test your changes.

6. After the fix is verified, select Export ARM Template to get the hotfix Resource Manager template.

7. Manually check this build into the adf_publish branch.

8. If you've configured your release pipeline to automatically trigger based on adf_publish check-ins, a new release will start automatically. Otherwise, manually queue a release.

9. Deploy the hotfix release to the test and production factories. This release contains the previous production payload plus the fix that you made in step 5.

10. Add the changes from the hotfix to the development branch so that later releases won't include the same bug.
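A minimal git sketch of step 3, with a placeholder branch name and the commit ID from step 2:

```
git checkout -b hotfix/fix-pipeline-bug <commit-id>
git push origin hotfix/fix-pipeline-bug
```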
See the video below for an in-depth tutorial on how to hotfix your environments.
[!VIDEO https://www.microsoft.com/videoplayer/embed/RE4I7fi]
When working on a team, there are instances where you may merge changes but don't want them to run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends the DevOps concept of using feature flags. In ADF, you can combine global parameters and the If Condition activity to hide sets of logic based upon these environment flags.
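As a hedged illustration, a pipeline could gate a branch of logic on a Boolean global parameter. The parameter name (featureFlag) and the activity and pipeline names below are hypothetical:

```json
{
    "name": "If_FeatureFlagEnabled",
    "type": "IfCondition",
    "typeProperties": {
        "expression": {
            "value": "@pipeline().globalParameters.featureFlag",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "name": "RunNewLogic",
                "type": "ExecutePipeline",
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "NewLogicPipeline",
                        "type": "PipelineReference"
                    }
                }
            }
        ],
        "ifFalseActivities": []
    }
}
```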
To learn how to set up a feature flag, see the below video tutorial:
[!VIDEO https://www.microsoft.com/videoplayer/embed/RE4IxdW]
If you're using Git integration with your data factory and have a CI/CD pipeline that moves your changes from development into test and then to production, we recommend these best practices:
- Git integration. Configure only your development data factory with Git integration. Changes to test and production are deployed via CI/CD and don't need Git integration.

- Pre- and post-deployment script. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see Update active triggers. The data factory team has provided a script to use located at the bottom of this page.

- Integration runtimes and sharing. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.

- Managed private endpoint deployment. If a private endpoint already exists in a factory and you try to deploy an ARM template that contains a private endpoint with the same name but with modified properties, the deployment will fail. In other words, you can successfully deploy a private endpoint as long as it has the same properties as the one that already exists in the factory. If any property is different between environments, you can override it by parameterizing that property and providing the respective value during deployment.

- Key Vault. When you use linked services whose connection information is stored in Azure Key Vault, it is recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter. (See the sketch after this list.)

- Resource naming. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'.

- By design, Data Factory doesn't allow cherry-picking of commits or selective publishing of resources. Publishes will include all changes made in the data factory.

  - Data factory entities depend on each other. For example, triggers depend on pipelines, and pipelines depend on datasets and other pipelines. Selective publishing of a subset of resources could lead to unexpected behaviors and errors.
  - On rare occasions when you need selective publishing, consider using a hotfix. For more information, see Hotfix production environment.

- The Azure Data Factory team doesn't recommend assigning Azure RBAC controls to individual entities (pipelines, datasets, and so on) in a data factory. For example, if a developer has access to a pipeline or a dataset, they should be able to access all pipelines or datasets in the data factory. If you feel that you need to implement many Azure roles within a data factory, look at deploying a second data factory.

- You can't publish from private branches.

- You can't currently host projects on Bitbucket.
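To make the Key Vault recommendation concrete, here's a hedged sketch of a linked service that reads its connection string from a key vault secret. The linked service and secret names are hypothetical; because the secret name stays the same in every environment, only the key vault's URL (a separate parameter) changes between stages.

```json
{
    "name": "AzureSqlDatabase1",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AzureKeyVault1",
                    "type": "LinkedServiceReference"
                },
                "secretName": "sql-connection-string"
            }
        }
    }
}
```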
The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using version 4.*.
When running a pre-deployment script, you will need to specify a variation of the following parameters in the Script Arguments field.
```
-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false
```
When running a post-deployment script, you will need to specify a variation of the following parameters in the Script Arguments field.
```
-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true
```
Here is the script that can be used for pre- and post-deployment. It accounts for deleted resources and resource references.
```powershell
param
(
[parameter(Mandatory = $false)] [String] $armTemplate,
[parameter(Mandatory = $false)] [String] $ResourceGroupName,
[parameter(Mandatory = $false)] [String] $DataFactoryName,
[parameter(Mandatory = $false)] [Bool] $predeployment=$true,
[parameter(Mandatory = $false)] [Bool] $deleteDeployment=$false
)
function getPipelineDependencies {
param([System.Object] $activity)
if ($activity.Pipeline) {
return @($activity.Pipeline.ReferenceName)
} elseif ($activity.Activities) {
$result = @()
$activity.Activities | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
return $result
} elseif ($activity.ifFalseActivities -or $activity.ifTrueActivities) {
$result = @()
$activity.ifFalseActivities | Where-Object {$_ -ne $null} | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
$activity.ifTrueActivities | Where-Object {$_ -ne $null} | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
return $result
} elseif ($activity.defaultActivities) {
$result = @()
$activity.defaultActivities | ForEach-Object{ $result += getPipelineDependencies -activity $_ }
if ($activity.cases) {
$activity.cases | ForEach-Object{ $_.activities } | ForEach-Object{$result += getPipelineDependencies -activity $_ }
}
return $result
} else {
return @()
}
}
function pipelineSortUtil {
param([Microsoft.Azure.Commands.DataFactoryV2.Models.PSPipeline]$pipeline,
[Hashtable] $pipelineNameResourceDict,
[Hashtable] $visited,
[System.Collections.Stack] $sortedList)
if ($visited[$pipeline.Name] -eq $true) {
return;
}
$visited[$pipeline.Name] = $true;
$pipeline.Activities | ForEach-Object{ getPipelineDependencies -activity $_ -pipelineNameResourceDict $pipelineNameResourceDict} | ForEach-Object{
pipelineSortUtil -pipeline $pipelineNameResourceDict[$_] -pipelineNameResourceDict $pipelineNameResourceDict -visited $visited -sortedList $sortedList
}
$sortedList.Push($pipeline)
}
function Get-SortedPipelines {
param(
[string] $DataFactoryName,
[string] $ResourceGroupName
)
$pipelines = Get-AzDataFactoryV2Pipeline -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$ppDict = @{}
$visited = @{}
$stack = new-object System.Collections.Stack
$pipelines | ForEach-Object{ $ppDict[$_.Name] = $_ }
$pipelines | ForEach-Object{ pipelineSortUtil -pipeline $_ -pipelineNameResourceDict $ppDict -visited $visited -sortedList $stack }
$sortedList = new-object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSPipeline]
while ($stack.Count -gt 0) {
$sortedList.Add($stack.Pop())
}
$sortedList
}
function triggerSortUtil {
param([Microsoft.Azure.Commands.DataFactoryV2.Models.PSTrigger]$trigger,
[Hashtable] $triggerNameResourceDict,
[Hashtable] $visited,
[System.Collections.Stack] $sortedList)
if ($visited[$trigger.Name] -eq $true) {
return;
}
$visited[$trigger.Name] = $true;
if ($trigger.Properties.DependsOn) {
$trigger.Properties.DependsOn | Where-Object {$_ -and $_.ReferenceTrigger} | ForEach-Object{
triggerSortUtil -trigger $triggerNameResourceDict[$_.ReferenceTrigger.ReferenceName] -triggerNameResourceDict $triggerNameResourceDict -visited $visited -sortedList $sortedList
}
}
$sortedList.Push($trigger)
}
function Get-SortedTriggers {
param(
[string] $DataFactoryName,
[string] $ResourceGroupName
)
$triggers = Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
$triggerDict = @{}
$visited = @{}
$stack = new-object System.Collections.Stack
$triggers | ForEach-Object{ $triggerDict[$_.Name] = $_ }
$triggers | ForEach-Object{ triggerSortUtil -trigger $_ -triggerNameResourceDict $triggerDict -visited $visited -sortedList $stack }
$sortedList = new-object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSTrigger]
while ($stack.Count -gt 0) {
$sortedList.Add($stack.Pop())
}
$sortedList
}
function Get-SortedLinkedServices {
param(
[string] $DataFactoryName,
[string] $ResourceGroupName
)
$linkedServices = Get-AzDataFactoryV2LinkedService -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
$LinkedServiceHasDependencies = @('HDInsightLinkedService', 'HDInsightOnDemandLinkedService', 'AzureBatchLinkedService')
$Akv = 'AzureKeyVaultLinkedService'
$HighOrderList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
$RegularList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
$AkvList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]
$linkedServices | ForEach-Object {
if ($_.Properties.GetType().Name -in $LinkedServiceHasDependencies) {
$HighOrderList.Add($_)
}
elseif ($_.Properties.GetType().Name -eq $Akv) {
$AkvList.Add($_)
}
else {
$RegularList.Add($_)
}
}
$SortedList = New-Object Collections.Generic.List[Microsoft.Azure.Commands.DataFactoryV2.Models.PSLinkedService]($HighOrderList.Count + $RegularList.Count + $AkvList.Count)
$SortedList.AddRange($HighOrderList)
$SortedList.AddRange($RegularList)
$SortedList.AddRange($AkvList)
$SortedList
}
$templateJson = Get-Content $armTemplate | ConvertFrom-Json
$resources = $templateJson.resources
#Triggers
Write-Host "Getting triggers"
$triggersInTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/triggers" }
$triggerNamesInTemplate = $triggersInTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
$triggersDeployed = Get-SortedTriggers -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$triggersToStop = $triggersDeployed | Where-Object { $triggerNamesInTemplate -contains $_.Name } | ForEach-Object {
New-Object PSObject -Property @{
Name = $_.Name
TriggerType = $_.Properties.GetType().Name
}
}
$triggersToDelete = $triggersDeployed | Where-Object { $triggerNamesInTemplate -notcontains $_.Name } | ForEach-Object {
New-Object PSObject -Property @{
Name = $_.Name
TriggerType = $_.Properties.GetType().Name
}
}
$triggersToStart = $triggersInTemplate | Where-Object { $_.properties.runtimeState -eq "Started" -and ($_.properties.pipelines.Count -gt 0 -or $_.properties.pipeline.pipelineReference -ne $null)} | ForEach-Object {
New-Object PSObject -Property @{
Name = $_.name.Substring(37, $_.name.Length-40)
TriggerType = $_.Properties.type
}
}
if ($predeployment -eq $true) {
#Stop all triggers
Write-Host "Stopping deployed triggers`n"
$triggersToStop | ForEach-Object {
if ($_.TriggerType -eq "BlobEventsTrigger") {
Write-Host "Unsubscribing" $_.Name "from events"
$status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
while ($status.Status -ne "Disabled"){
Start-Sleep -s 15
$status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
}
}
Write-Host "Stopping trigger" $_.Name
Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}
}
else {
#Deleted resources
#pipelines
Write-Host "Getting pipelines"
$pipelinesADF = Get-SortedPipelines -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$pipelinesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/pipelines" }
$pipelinesNames = $pipelinesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
$deletedpipelines = $pipelinesADF | Where-Object { $pipelinesNames -notcontains $_.Name }
#dataflows
$dataflowsADF = Get-AzDataFactoryV2DataFlow -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$dataflowsTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/dataflows" }
$dataflowsNames = $dataflowsTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40) }
$deleteddataflow = $dataflowsADF | Where-Object { $dataflowsNames -notcontains $_.Name }
#datasets
Write-Host "Getting datasets"
$datasetsADF = Get-AzDataFactoryV2Dataset -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$datasetsTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/datasets" }
$datasetsNames = $datasetsTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40) }
$deleteddataset = $datasetsADF | Where-Object { $datasetsNames -notcontains $_.Name }
#linkedservices
Write-Host "Getting linked services"
$linkedservicesADF = Get-SortedLinkedServices -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$linkedservicesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/linkedservices" }
$linkedservicesNames = $linkedservicesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
$deletedlinkedservices = $linkedservicesADF | Where-Object { $linkedservicesNames -notcontains $_.Name }
#Integrationruntimes
Write-Host "Getting integration runtimes"
$integrationruntimesADF = Get-AzDataFactoryV2IntegrationRuntime -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName
$integrationruntimesTemplate = $resources | Where-Object { $_.type -eq "Microsoft.DataFactory/factories/integrationruntimes" }
$integrationruntimesNames = $integrationruntimesTemplate | ForEach-Object {$_.name.Substring(37, $_.name.Length-40)}
$deletedintegrationruntimes = $integrationruntimesADF | Where-Object { $integrationruntimesNames -notcontains $_.Name }
#Delete resources
Write-Host "Deleting triggers"
$triggersToDelete | ForEach-Object {
Write-Host "Deleting trigger " $_.Name
$trig = Get-AzDataFactoryV2Trigger -name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName
if ($trig.RuntimeState -eq "Started") {
if ($_.TriggerType -eq "BlobEventsTrigger") {
Write-Host "Unsubscribing trigger" $_.Name "from events"
$status = Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
while ($status.Status -ne "Disabled"){
Start-Sleep -s 15
$status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
}
}
Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}
Remove-AzDataFactoryV2Trigger -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
Write-Host "Deleting pipelines"
$deletedpipelines | ForEach-Object {
Write-Host "Deleting pipeline " $_.Name
Remove-AzDataFactoryV2Pipeline -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
Write-Host "Deleting dataflows"
$deleteddataflow | ForEach-Object {
Write-Host "Deleting dataflow " $_.Name
Remove-AzDataFactoryV2DataFlow -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
Write-Host "Deleting datasets"
$deleteddataset | ForEach-Object {
Write-Host "Deleting dataset " $_.Name
Remove-AzDataFactoryV2Dataset -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
Write-Host "Deleting linked services"
$deletedlinkedservices | ForEach-Object {
Write-Host "Deleting Linked Service " $_.Name
Remove-AzDataFactoryV2LinkedService -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
Write-Host "Deleting integration runtimes"
$deletedintegrationruntimes | ForEach-Object {
Write-Host "Deleting integration runtime " $_.Name
Remove-AzDataFactoryV2IntegrationRuntime -Name $_.Name -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Force
}
if ($deleteDeployment -eq $true) {
Write-Host "Deleting ARM deployment ... under resource group: " $ResourceGroupName
$deployments = Get-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName
$deploymentsToConsider = $deployments | Where { $_.DeploymentName -like "ArmTemplate_master*" -or $_.DeploymentName -like "ArmTemplateForFactory*" } | Sort-Object -Property Timestamp -Descending
$deploymentName = $deploymentsToConsider[0].DeploymentName
Write-Host "Deployment to be deleted: " $deploymentName
$deploymentOperations = Get-AzResourceGroupDeploymentOperation -DeploymentName $deploymentName -ResourceGroupName $ResourceGroupName
$deploymentsToDelete = $deploymentOperations | Where { $_.properties.targetResource.id -like "*Microsoft.Resources/deployments*" }
$deploymentsToDelete | ForEach-Object {
Write-host "Deleting inner deployment: " $_.properties.targetResource.id
Remove-AzResourceGroupDeployment -Id $_.properties.targetResource.id
}
Write-Host "Deleting deployment: " $deploymentName
Remove-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName -Name $deploymentName
}
#Start active triggers - after cleanup efforts
Write-Host "Starting active triggers"
$triggersToStart | ForEach-Object {
if ($_.TriggerType -eq "BlobEventsTrigger") {
Write-Host "Subscribing" $_.Name "to events"
$status = Add-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
while ($status.Status -ne "Enabled"){
Start-Sleep -s 15
$status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name
}
}
Write-Host "Starting trigger" $_.Name
Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}
}
```