diff --git a/docs/2.0/docs/accountfactory/concepts/delegated-repositories.md b/docs/2.0/docs/accountfactory/concepts/delegated-repositories.md index 4cbd6f47e..fa3971fd1 100644 --- a/docs/2.0/docs/accountfactory/concepts/delegated-repositories.md +++ b/docs/2.0/docs/accountfactory/concepts/delegated-repositories.md @@ -15,7 +15,7 @@ Using this pattern core platform teams have the ability to: * Enforce least-privilege access to IaC, restricting teams to have access only to IaC and deployment capability / role assumptions for resources that they are responsible for. ## Delegated Repository Creation -Delegated repositories are optionally created by [Account Factory](/2.0/docs/accountfactory/concepts) during account creation. A delegated account vend follows the following (automated) workflow: +Delegated repositories are optionally created by [Account Factory](/2.0/docs/accountfactory/concepts) during account creation. A delegated account vend follows this automated workflow: ```mermaid sequenceDiagram diff --git a/docs/2.0/docs/accountfactory/installation/index.md b/docs/2.0/docs/accountfactory/installation/index.md index ce8564b59..41104041b 100644 --- a/docs/2.0/docs/accountfactory/installation/index.md +++ b/docs/2.0/docs/accountfactory/installation/index.md @@ -6,7 +6,7 @@ Account Factory is automatically added to [new Pipelines root repositories](/2.0 Out of the box Account Factory has the following components: -- 📋 An HTML form for generating workflow inputs: `.github/workflows/account-factory-inputs.html` +- 📋 An HTML form for generating workflow inputs: `.github/workflows/account-factory-inputs.html` - 🏭 A workflow for generating new requests: `.github/workflows/account-factory.yml` diff --git a/docs/2.0/docs/accountfactory/tutorials/modify-account.md b/docs/2.0/docs/accountfactory/tutorials/modify-account.md index 4a11239ac..cf7f01073 100644 --- a/docs/2.0/docs/accountfactory/tutorials/modify-account.md +++
b/docs/2.0/docs/accountfactory/tutorials/modify-account.md @@ -1,5 +1,3 @@ - - # Modifying an AWS Account Over time you will need to run various operations on your AWS accounts such as requesting new accounts, creating new accounts, renaming accounts, etc. With the Gruntwork Account Factory, some AWS account management operations should only be done using IaC, some can only be done using ClickOps, and some can be done using either. diff --git a/docs/2.0/docs/accountfactory/tutorials/remove-account.md b/docs/2.0/docs/accountfactory/tutorials/remove-account.md index 3c370787d..1f6bf336a 100644 --- a/docs/2.0/docs/accountfactory/tutorials/remove-account.md +++ b/docs/2.0/docs/accountfactory/tutorials/remove-account.md @@ -1,5 +1,3 @@ - - # Removing an AWS Account ## Prerequisites @@ -15,7 +13,7 @@ We recommend following a two step procedure to close AWS Accounts managed by Dev 1. [Cleanup Infrastructure Code](#1-cleanup-infrastructure-code) and modify OpenTofu/Terraform state for the Control Tower module. 1. [Close Account with Clickops](#2-close-the-accounts-in-aws-organizations) -We are recommending that accounts be closed with ClickOps instead of using Gruntwork Pipelines. Removing the account via pipelines by deleting the account request file can and often does work, however the underlying AWS Service Catalog that we use to interact with Control Tower and deprovision the account is not (https://github.com/hashicorp/terraform-provider-aws/issues/31705) and often returns spurious errors that can require multiple retries to complete successfully. The procedure here is fundamentally about working around that unreliability. +We recommend that accounts be closed with ClickOps instead of using Gruntwork Pipelines.
Removing the account via pipelines by deleting the account request file can, and often does, work; however, the underlying AWS Service Catalog that we use to interact with Control Tower and deprovision the account is not reliable (see https://github.com/hashicorp/terraform-provider-aws/issues/31705) and often returns spurious errors that can require multiple retries to complete successfully. The procedure here is fundamentally about working around that unreliability. ### 1. Cleanup Infrastructure Code diff --git a/docs/2.0/docs/pipelines/guides/extending-pipelines.md b/docs/2.0/docs/pipelines/guides/extending-pipelines.md index 48a2ef8b8..5f3fcd628 100644 --- a/docs/2.0/docs/pipelines/guides/extending-pipelines.md +++ b/docs/2.0/docs/pipelines/guides/extending-pipelines.md @@ -1,61 +1,59 @@ # Extending Your Pipeline -Gruntwork Pipelines is designed to be extensible. This means that you can add your own customizations to the GitHub Actions Workflows, and the underlying custom GitHub Actions to suit your organization's needs. This document will guide you through the process of extending your pipeline. +Gruntwork Pipelines is designed to be extensible, enabling users to tailor GitHub Actions workflows and underlying custom actions to align with their organization’s unique requirements. This guide explains how to extend your pipeline. +## Pipelines extension architecture -## Pipelines Extension Architecture +Extending Gruntwork Pipelines requires managing code across three distinct repositories. This architecture segregates customer-specific modifications from Gruntwork-maintained code, minimizing conflicts and simplifying updates. The repositories are as follows: -Extending Gruntwork Pipelines involves managing code in three different source code repositories.
We've architected these repositories in such a way that customer modifications live in a different repository from code that Gruntwork maintains, therefore dramatically limiting, or even eliminating, the work required to incorporate upstream changes from Gruntwork. The three repositories are: +- **`pipelines-workflows`**: Handles the central orchestration of control flow within pipelines. It contains minimal business logic and primarily calls other repositories to perform tasks. +- **`pipelines-actions`**: Hosts most of the business logic for pipelines. +- **`pipelines-actions-customizations`**: Serves as the primary repository for customer-specific custom logic. -* `pipelines-workflows` - This is the central orchestration of the control flow within pipelines. This repo contains as little business logic as possible and generally makes calls to other repositories to do work. -* `pipelines-actions` - This is where the bulk of the business logic for pipelines lives -* `pipelines-actions-customization` - This is where a customer's custom logic primarily lives. - -The intention is that customers will never have to touch code that is frequently modified by Gruntwork, namely `pipelines-actions`. Instead customers will update code references inside `pipelines-workflows` to point to custom code in another repository, so the only surface area for merge conflict/code maintenance is a scant few lines of reference change in `pipelines-workflows`. +This structure ensures that customers rarely need to modify Gruntwork-managed repositories, such as `pipelines-actions`. Instead, customizations typically involve modifying code references in `pipelines-workflows` to point to customized repositories. This approach minimizes the likelihood of merge conflicts or maintenance issues.
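To illustrate what such a reference change can look like, the sketch below repoints a single `uses:` line in a `pipelines-workflows` copy from a Gruntwork-provided stub action to a customized copy. The `acme` organization and the version tags are hypothetical placeholders, not real references:

```yml
# Before: the workflow step calls Gruntwork's stub custom action
# (tag shown is a placeholder for whatever version you have pinned)
- name: "[Baseline]: Pre Provision New Account Custom Action"
  uses: gruntwork-io/pipelines-actions/.github/custom-actions/pre-provision-new-account@v3

# After: the same step calls your customized copy
# ("acme" and the tag are placeholders for your org and pinned version)
- name: "[Baseline]: Pre Provision New Account Custom Action"
  uses: acme/pipelines-actions-customizations/.github/actions/pre-provision-new-account@v1.0.0
```

Because only this reference line differs from upstream, pulling in Gruntwork's updates to the surrounding workflow rarely conflicts with the customization.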
+## Extend the GitHub Actions workflow + Diagram of Gruntwork Pipelines Repositories -## Extend the GitHub Actions Workflow - -The GitHub Actions Workflow that Pipelines uses is a [Reusable Workflow](https://docs.github.com/en/actions/using-workflows/reusing-workflows). This allows your `infrastructure-live` repositories to reference a specific pinned version of it in your `.github/workflows/pipelines.yml` file without having to host any of the code yourself. - -If you would like to extend this workflow to introduce custom logic that is specific to your organization, you can [fork the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) or [mirror the repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/duplicating-a-repository). +The Pipelines workflow is implemented as a [Reusable Workflow](https://docs.github.com/en/actions/using-workflows/reusing-workflows). This allows you to reference a specific pinned version in your `.github/workflows/pipelines.yml` file without hosting the workflow code yourself. -Common reasons that you might decide to do this include: +To extend this workflow for custom organizational logic, you can either [fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) or [mirror](https://docs.github.com/en/repositories/creating-and-managing-repositories/duplicating-a-repository) the repository. Common reasons for extending the workflow include: -- You wish to add additional steps to the workflow that are specific to your organization. -- You wish to utilize a forked action used in existing step(s) in the workflow to suit your organization's needs (more on that below). +- Adding organization-specific steps to the workflow. +- Utilizing customized versions of existing actions in the workflow. 
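For context, a minimal caller workflow in an `infrastructure-live` repository that pins the reusable workflow might look like the following sketch. The trigger events, the `v3` tag, and the secret wiring are illustrative assumptions, not exact Gruntwork defaults:

```yml
name: Pipelines
on:
  push:
    branches:
      - main
  pull_request:

jobs:
  GruntworkPipelines:
    # Pinning to a tag (or commit SHA) controls when you take upstream changes.
    # If you fork or mirror pipelines-workflows, point this at your copy instead.
    uses: gruntwork-io/pipelines-workflows/.github/workflows/pipelines.yml@v3
    secrets:
      PIPELINES_READ_TOKEN: ${{ secrets.PIPELINES_READ_TOKEN }}
```

With a fork or mirror, only the `uses:` line above changes; the rest of the caller workflow stays the same.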
:::caution -If you choose to fork Gruntwork's `pipelines-workflows` into your GitHub organization note that Gruntwork will have visibility to that repository ([docs](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/about-permissions-and-visibility-of-forks#about-permissions-for-creating-forks)). If you have concerns about this you can instead [mirror](https://docs.github.com/en/repositories/creating-and-managing-repositories/duplicating-a-repository) the repository. Reach out to [support@gruntwork.io](mailto:support@gruntwork.io) if you need assistance with this. +If you fork Gruntwork's `pipelines-workflows` repository, Gruntwork will have visibility into the forked repository ([docs](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/about-permissions-and-visibility-of-forks#about-permissions-for-creating-forks)). If this is a concern, you can [mirror](https://docs.github.com/en/repositories/creating-and-managing-repositories/duplicating-a-repository) the repository instead. For assistance, contact [support@gruntwork.io](mailto:support@gruntwork.io). -Do not include any sensitive information in your forked repository, especially if hosted publicly. +Avoid including sensitive information in forked repositories, especially if they are public. ::: -## How to Extend the Pipelines Workflow +## How to extend the Pipelines workflow -Once you have created your own version of `pipelines-workflows` you are free to change the code as you wish. One key concern, however, is whether you intend to continue to track upstream changes and incorporate any upstream bugfixes/new features into your workflow. We have endeavored to design `pipelines-workflows` in such a way that Gruntwork should seldom be making changes to those files, and to provide a mechanism for customers to extend `pipelines-workflows` in a way that touches very few lines of Gruntwork maintained code. The goal being to allow you to make customizations, and also merge in upstream changes with very few or no merge conflicts. +Once you've created your version of `pipelines-workflows`, you're free to modify the code.
If you plan to track upstream changes, note that Gruntwork designed `pipelines-workflows` to require minimal updates and provides ways to customize it with little impact on Gruntwork-maintained code. This approach helps you merge upstream changes smoothly with minimal or no conflicts. -Our recommended best practice for customizing `pipelines-workflows` is to do so by injecting [customized actions](#adding-custom-actions) at various points in the workflow. We have already included a handful of entrypoints and sample custom-actions to demonstrate this practice. By using custom actions you minimize the actual lines of code changed in the workflow, and also create a clean data-contract between the workflow and your custom action, making it easier to refactor (if necessary) for future `pipelines-workflow` updates. Again, Gruntwork endeavors to do the bulk of its work on Pipelines within `pipelines-actions` which should mean very few updates will be required. +The recommended approach for customizing `pipelines-workflows` is to inject [custom actions](#adding-custom-actions) at predefined entry points. Gruntwork provides several entry points and sample actions to guide this process. This approach minimizes changes to Gruntwork-maintained files and establishes clear data contracts between workflows and custom actions, making future updates easier. Most changes occur within `pipelines-actions`, reducing the need for frequent workflow updates. -### Adding Custom Actions +### Adding custom actions #### Procedure -There are many ways to actually implement custom actions in your workflow, this procedure is a step by step guide to the method that Gruntwork recommends as best practice. +This step-by-step guide outlines best practices for implementing custom actions: **Creating the custom action:** -1. Create a new repository `pipelines-actions-customizations` -1. Create a new folder in that repository, `.github/actions/` -1.
Identify where in the workflow you want to customize (we provide a set of [example](https://github.com/gruntwork-io/pipelines-actions/tree/main/.github/custom-actions) actions and [custom-action hook locations](https://github.com/gruntwork-io/pipelines-workflows/blob/main/.github/workflows/pipelines-root.yml) for reference). -1. If you're using one of our default hook points, copy the stub example hook `action.yml` file from the `pipelines-actions` [repository](https://github.com/gruntwork-io/pipelines-actions/tree/main/.github/custom-actions) and place it into `.github/actions/$HOOK_NAME/action.yml`. - 1. If you're not using a sample hook it may still be helpful to copy an existing one for reference, particularly around the inputs that the custom actions accept. -1. At this point you can customize `action.yml` to execute your desired logic +1. Create a new repository, `pipelines-actions-customizations`. +2. Add a folder named `.github/actions/` to the repository. +3. Identify the appropriate workflow location for customization (use [examples](https://github.com/gruntwork-io/pipelines-actions/tree/main/.github/custom-actions) and [custom-action hook locations](https://github.com/gruntwork-io/pipelines-workflows/blob/main/.github/workflows/pipelines-root.yml) as references). +4. For default hook points, copy the corresponding stub `action.yml` file from the `pipelines-actions` repository to `.github/actions/$HOOK_NAME/action.yml`. + - For non-standard hooks, review an existing `action.yml` file for guidance, especially for input definitions. +5. Modify `action.yml` to define your custom logic. **Adding the custom action to your workflow:** -1. Create a fork or mirror of the `pipelines-workflows` [repository](https://github.com/gruntwork-io/pipelines-workflows). -1. Identify where you want to run your custom action -1. Add a step to checkout your custom action repository +1. Fork or mirror the `pipelines-workflows` repository. +2.
Locate the workflow section where the custom action should run. +3. Add a step to check out your custom actions repository. + ```yml - name: Checkout ACME's Custom Pipelines Actions uses: actions/checkout@v4 @@ -65,7 +63,8 @@ There are many ways to actually implement custom actions in your workflow, this # We recommend pinning this to a specific commit, branch or tag instead of main ref: main ``` -1. Call your custom action. Make sure you pay attention to what inputs you are passing to your custom action. Most custom actions will need access to tokens (e.g. `PIPELINES_READ_TOKEN`) as well as the `gruntwork_context` object. The context object contains all of the [outputs](https://github.com/gruntwork-io/pipelines-actions/blob/main/.github/actions/pipelines-bootstrap/action.yml#L43) from the `pipelines-bootstrap` action which includes useful metadata about the current workflow execution. +4. Call your custom action. Ensure you carefully manage the inputs passed to your custom action. Most custom actions require access to tokens (e.g., `PIPELINES_READ_TOKEN`) and the `gruntwork_context` object. This context object contains all relevant [outputs](https://github.com/gruntwork-io/pipelines-actions/blob/main/.github/actions/pipelines-bootstrap/action.yml#L43) from the `pipelines-bootstrap` action, providing valuable metadata about the current workflow execution. +
Below is an example of the pre-provision new account custom hook: ```yml - name: Checkout Pipelines Actions @@ -97,7 +96,13 @@ Out of the box the `pipelines-root.yml` file comes with several sample custom ac gruntwork_context: ${{ toJson(steps.gruntwork_context.outputs) }} ``` -There are two key components to the hook, 1) Checking out actions and 2) Running the custom action. As Pipelines is called as a [reusable workflow](https://docs.github.com/en/actions/using-workflows/reusing-workflows#calling-a-reusable-workflow) it does not have access to any other code by default, even in its own repository. This means that any external code has to be explicitly brought in, either via a checkout or via a repository reference your code has access to. In our examples we store the custom-action stub examples in the same repository as the actual pipelines actions. You will need to store your custom actions in your own repository and either checkout that code or make it available to be called directly from this workflow. For example: +There are two key components to the hook: + +1. **Checking out actions**: Since Pipelines is invoked as a [reusable workflow](https://docs.github.com/en/actions/using-workflows/reusing-workflows#calling-a-reusable-workflow), it does not have inherent access to any other code, even within its own repository. To use external code, it must be explicitly included either by checking out the necessary repository or referencing a repository accessible to the workflow. + +2. **Running the custom action**: Custom actions must be stored in a repository you control. In the provided examples, custom-action stubs are stored in the same repository as the Pipelines actions. For your implementation, ensure your custom actions are stored in your repository, and bring them into the workflow by checking out the code or referencing the repository directly. 
+ +For example: ```yml - name: Checkout Pipelines Actions @@ -125,15 +130,14 @@ There are two key components to the hook, 1) Checking out actions and 2) Running gruntwork_context: ${{ toJson(steps.gruntwork_context.outputs) }} ``` -### Support for extending Workflows - -We, at Gruntwork, want to make sure we're addressing real business use-cases with our documentation, so if you have a need to extend the Pipelines Workflow and are not comfortable with doing so following the documentation above, please reach out to us at [support@gruntwork.io](mailto:support@gruntwork.io). +### Support for extending workflows +At Gruntwork, we are committed to addressing real-world business needs with our documentation. If you require assistance in extending the Pipelines Workflow and are not comfortable following the steps outlined above, please reach out to us at [support@gruntwork.io](mailto:support@gruntwork.io). -## Extending the GitHub Actions +## Extending GitHub Actions -In addition to extending the top-level workflow, you can also extend the underlying custom GitHub Actions that the workflow uses. This allows you to customize the behavior of individual Actions to suit your organization's needs. +Beyond extending the top-level workflow, you can also modify the underlying custom GitHub Actions that the workflow employs. This approach allows for precise customization of the behavior of individual Actions to meet your organization's specific requirements. :::note -In order to customize the behavior of an Action, you will need to fork the repository that contains the Action, which might be another GitHub Action, or a Workflow. +To customize the behavior of an Action, you must fork the repository that contains the Action. This repository may house another GitHub Action or a Workflow. 
::: diff --git a/docs/2.0/docs/pipelines/guides/installing-drift-detection.md b/docs/2.0/docs/pipelines/guides/installing-drift-detection.md index 70914eaaa..7610a8cf6 100644 --- a/docs/2.0/docs/pipelines/guides/installing-drift-detection.md +++ b/docs/2.0/docs/pipelines/guides/installing-drift-detection.md @@ -3,26 +3,26 @@ import PersistentCheckbox from '/src/components/PersistentCheckbox'; :::note -Pipelines Drift Detection is only available to Devops Foundations Enterprise customers. +Pipelines Drift Detection is exclusively available to DevOps Foundations Enterprise customers. ::: -If you're creating new pipelines repositories using the latest version of Pipelines, then Drift Detection will be installed automatically without any action on your part. +For new pipelines repositories using the latest version of Pipelines, Drift Detection is installed automatically and requires no additional action. -If you want to upgrade an older repository to add Drift Detection perform the following steps: +To upgrade an existing repository and enable Drift Detection, follow these steps: -### Step 1 - Ensure the GitHub App is Installed +### Step 1 - Ensure the GitHub App is installed -Ensure you are using the [GitHub App](/2.0/docs/pipelines/installation/viagithubapp) in this repository. Drift Detection requires permissions from the GitHub App and is not available via machine user tokens. +Verify that the [GitHub App](/2.0/docs/pipelines/installation/viagithubapp) is installed and in use for this repository. Drift Detection relies on permissions granted by the GitHub App and is not compatible with machine user tokens. -### Step 2 - Setup the Workflow file +### Step 2 - Set up the workflow file -Create a new workflow file in your repository at `.github/workflows/pipelines-drift-detection.yml` +Create a new workflow file in your repository at `.github/workflows/pipelines-drift-detection.yml`. -This is the same directory where your other Pipelines workflows are located. 
+This is the same directory that contains your other Pipelines workflows. -Add the following content to the workflow +Add the following content to the workflow: ```yml name: Pipelines Drift Detection @@ -51,7 +51,7 @@ jobs: branch-name: ${{ inputs.branch-name }} ``` -Commit the changes to the repository. If you are using [branch protection](/2.0/docs/pipelines/installation/branch-protection) (highly recommended) you will need to create a new pull request to add the workflow. +Commit the changes to the repository. If [branch protection](/2.0/docs/pipelines/installation/branch-protection) is enabled (strongly recommended), you must create a new pull request to incorporate the workflow into your repository. diff --git a/docs/2.0/docs/pipelines/guides/managing-secrets.md b/docs/2.0/docs/pipelines/guides/managing-secrets.md index f9f0c6664..636273fd5 100644 --- a/docs/2.0/docs/pipelines/guides/managing-secrets.md +++ b/docs/2.0/docs/pipelines/guides/managing-secrets.md @@ -1,30 +1,19 @@ # Secrets -Continuous Integration systems frequently need access to sensitive resources, and as a consequence, they also frequently -need access to secrets to authenticate to those resources. These secrets can be anything from API keys to passwords to -certificates. +Continuous Integration systems often require access to sensitive resources, which necessitates the use of secrets such as API keys, passwords, or certificates. Pipelines is designed to minimize the use of long-lived secrets and instead leverages ephemeral credentials whenever possible. This approach reduces the risk of credential leaks and streamlines secret rotation. -The current design in Pipelines is to minimize the number of secrets that have to be used, and to leverage ephemeral -credentials whenever possible. This is done to minimize the risk of long lived secrets being leaked, and to make it -easier to rotate secrets when necessary.
+The only long-lived credentials you must create, rotate, and maintain for Pipelines are those used to authenticate GitHub Machine Users. For more details, refer to the [GitHub Machine Users documentation](/2.0/docs/pipelines/installation/viamachineusers). We are continuously working to enhance the security of Pipelines and aim to further reduce this requirement over time. -The only long lived credentials that you are required to create, rotate and maintain to use Pipelines are the -credentials used to authenticate as your [GitHub Machine users](/2.0/docs/pipelines/installation/viamachineusers). We are constantly looking for ways -to improve the security posture of Pipelines, and are actively working on ways to reduce even this minimal requirement. +## Authenticating with GitHub -## Authenticating With GitHub +To interact with the GitHub API, Pipelines uses either a GitHub App or Machine User [Personal Access Tokens (PATs)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens), depending on your installation method. For information on creating and managing these tokens, see the [Machine Users documentation](/2.0/docs/pipelines/installation/viamachineusers). -To authenticate with GitHub, Pipelines uses either a GitHub App or Machine User [Personal Access Tokens (PATs)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) (depending on how you installed Pipelines) to authenticate and interact with the GitHub API. You can -learn more about how these tokens are created and managed in the [Machine Users](/2.0/docs/pipelines/installation/viamachineusers) documentation. 
+## Authenticating with AWS -## Authenticating With AWS +Pipelines requires authentication with AWS but avoids long-lived credentials by utilizing [OIDC](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services). OIDC establishes an authenticated relationship between a specific Git reference in a repository and a corresponding AWS role, enabling Pipelines to assume the role based on where the pipeline is executed. -At a minimum, Pipelines also needs to authenticate with AWS. It does not do so with long lived credentials, however. -Instead, it leverages [OIDC to authenticate with AWS](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services). OIDC allows for an authenticated relationship to be defined between -a particular git reference in a particular repository with a particular role in AWS, allowing pipelines to assume a role -simply by virtue of where that pipeline is run. +The role assumption process operates as follows: -The process for this role assumption looks like the following: ```mermaid sequenceDiagram @@ -37,27 +26,18 @@ sequenceDiagram AWS STS->>GitHub Actions: Temporary AWS Credentials ``` -As a consequence, Pipelines does not need to store any long lived AWS credentials, and can instead rely on the ephemeral -credentials that are generated by AWS STS that grant least privilege access to the resources required for the operation -it is engaging in (e.g. read access during PR open and write access during PR merge). +As a result, Pipelines avoids storing long-lived AWS credentials and instead relies on ephemeral credentials generated by AWS STS. These credentials grant least-privilege access to the resources needed for the specific operation being performed (e.g., read access during a pull request open event or write access during a merge). 
-## Other Providers +## Other providers -If you are managing the configuration for other services in IAC, using Terragrunt, you may need to configure a provider -for that service in Pipelines. In that case, you will need to provide the necessary credentials to authenticate with -that provider. We recommend that whenever possible, use the same principles as we do with AWS, and leverage ephemeral -credentials to authenticate with the provider, granting least privilege access to the resources required for the -operation and avoid writing long lived credentials to disk. +If you are managing configurations for additional services using Infrastructure as Code (IaC) tools like Terragrunt, you may need to configure a provider for those services in Pipelines. In such cases, you must supply the necessary credentials for authenticating with the provider. Whenever possible, follow the same principles applied to AWS: use ephemeral credentials, grant only the minimum permissions required, and avoid storing long-lived credentials on disk. -### Configuring Providers in Terragrunt +### Configuring providers in Terragrunt -For example, let's say you are configuring the [CloudFlare Terraform provider](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). +For example, consider configuring the [Cloudflare Terraform provider](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs). This provider supports multiple authentication methods to enable secure API calls to Cloudflare services. To authenticate with Cloudflare and manage the associated credentials securely, you need to configure your `terragrunt.hcl` file appropriately. -There are multiple methods of authentication supported by CloudFlare in this provider to allow you to make authenticated -API calls to CloudFlare services. 
How should you configure your `terragrunt.hcl` file to authenticate with CloudFlare, -and how should you manage the credentials required for Terragrunt to access these secrets safely? +First, examine the default AWS authentication provider setup in the root `terragrunt.hcl` file: -First, take a look at the provider generated by default in the root `terragrunt.hcl` file for AWS authentication: ```hcl generate "provider" { @@ -77,14 +57,11 @@ EOF } ``` -This provider block is generated on the fly whenever a `terragrunt` command is run, and provides the necessary -configuration for the AWS provider to discover the credentials that have been made available to the environment by -the [configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) GitHub Action. +This provider block is dynamically generated during the execution of any `terragrunt` command and supplies the AWS provider with the required configuration to discover credentials made available by the [configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials) GitHub Action. -No secrets are written to disk to support this, and secrets are discovered at runtime by the AWS provider. +With this approach, no secrets are written to disk. Instead, the AWS provider dynamically retrieves secrets at runtime. -Looking at CloudFlare documentation, there are multiple methods of authenticating the CloudFlare provider, including the -use of the [api_token](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs#api_key) field in the `provider` block, as shown in the documentation: +According to the Cloudflare documentation, the Cloudflare provider supports several authentication methods. 
One option involves using the [api_token](https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs#api_key) field in the `provider` block, as illustrated in the documentation: ```hcl generate "cloudflare_provider" { @@ -98,11 +75,9 @@ EOF } ``` -Populating this `var.cloudflare_api_token` for the provider requires having a `variable "cloudflare_api_token" {}` -block in a `.tf` file that is checked into the repository, and setting the `TF_VAR_cloudflare_api_token` environment -variable set to the value of the CloudFlare API token. The easiest way to do this is to leverage the `inputs` value -in `terragrunt.hcl` files to set the value of the `cloudflare_api_token` variable to the value -of the `CLOUDFLARE_API_TOKEN` +To populate the `var.cloudflare_api_token` for the provider, you must include a `variable "cloudflare_api_token" {}` block within a `.tf` file that is committed to the repository. Additionally, the `TF_VAR_cloudflare_api_token` environment variable needs to be set to the corresponding Cloudflare API token value. + +A straightforward method for achieving this is by using the `inputs` attribute in `terragrunt.hcl` files to assign the `cloudflare_api_token` variable a value derived from the `CLOUDFLARE_API_TOKEN` environment variable. ```hcl inputs = { @@ -111,11 +86,11 @@ inputs = { ``` :::note -Here, `fetch-cloudflare-api-token.sh` is a script that fetches the CloudFlare API token from a secret store and prints it to stdout. +In this context, `fetch-cloudflare-api-token.sh` is a script designed to retrieve the Cloudflare API token from a secret store and output it to stdout. -You can use whatever you like to fetch the secret, as long as it prints the secret to stdout. +You are free to use any method to fetch the secret, provided it outputs the value to stdout. -Two simple examples of how you might fetch the secret are: +Here are two straightforward examples of how you might fetch the secret: 1. 
Using `aws secretsmanager`: @@ -129,15 +104,13 @@ Two simple examples of how you might fetch the secret are: aws ssm get-parameter --name cloudflare-api-token --query Parameter.Value --output text --with-decryption ``` -Given that you are already authenticated with AWS in Pipelines for the sake of interacting with state, -this can be a convenient mechanism for fetching the CloudFlare API token. +Given that Pipelines is already authenticated with AWS for interacting with state, this setup provides a convenient method for retrieving the Cloudflare API token. ::: -Alternatively, note that the `api_token` is an optional value, and in a manner similar to that of the AWS provider, -you can use the `CLOUDFLARE_API_TOKEN` environment variable instead to provide the API token to the provider at runtime. +Alternatively, note that the `api_token` field is optional. Similar to the AWS provider, you can use the `CLOUDFLARE_API_TOKEN` environment variable to supply the API token to the provider at runtime. -To use that, you can modify the `provider` block to look like the following: +To achieve this, you can update the `provider` block as follows: ```hcl generate "cloudflare_provider" { @@ -149,9 +122,8 @@ EOF } ``` -To have the `CLOUDFLARE_API_TOKEN` environment variable set in the environment, before Terragrunt invokes -OpenTofu/Terraform, you'll want to make sure that your `terraform` block in the `terragrunt.hcl` file looks something -like the following: +To ensure the `CLOUDFLARE_API_TOKEN` environment variable is set in the environment before Terragrunt invokes OpenTofu/Terraform, configure the `terraform` block in your `terragrunt.hcl` file as follows: + ```hcl terraform { @@ -164,56 +136,55 @@ terraform { } } ``` +### Managing secrets -### Managing Secrets - -Now that you have the configurations set for the provider, you'll want to make sure that the secrets are placed in a -secure and convenient location. 
There are many ways to store secrets, and trade-offs inherent in each method.
+When configuring providers and Pipelines, it’s important to store secrets in a secure and accessible location. Several options are available for managing secrets, each with its advantages and trade-offs.

#### GitHub Secrets

-This is the simplest way to store secrets, and is accessible by default when working with GitHub Actions. You can
-follow GitHub documentation on [using secrets in GitHub Actions](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions)
-to learn more about how to store and use secrets in GitHub Actions.
+GitHub Secrets is the simplest option for storing secrets and is natively supported in GitHub Actions. Refer to GitHub’s [documentation on using secrets in GitHub Actions](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions) for guidance on setting and using secrets.

-The trade-offs of using GitHub Secrets for secrets management in Pipelines is that workflows have to be edited to access
-them, and that there isn't granular authorization for accessing those secrets. This means that the secrets are available
-when running any infrastructure update.
+**Advantages**:
+- Easy to configure and use within GitHub Actions workflows.
+- No additional infrastructure or external services required.

-#### AWS Secrets Manager
+**Trade-offs**:
+- No granular authorization: the secrets are available when running any infrastructure update.
+- Workflows must be edited to make the secrets available to Terragrunt.

-This is a more sophisticated way to store secrets, and requires that you provision requisite secrets in AWS, then
-configure the necessary permissions to access those secrets by the role used by Pipelines to interact with them.
+#### AWS Secrets Manager

-Permission can be granted at a fairly granular level so that access is only granted to the secrets when necessary.
+AWS Secrets Manager offers a sophisticated solution for managing secrets.
It allows for provisioning secrets in AWS and configuring fine-grained access controls through AWS IAM. It also supports advanced features like secret rotation and access auditing. -Secrets manager also has fairly sophisticated mechanisms for rotating secrets, and for auditing access to secrets. +**Advantages**: +- Granular access permissions, ensuring secrets are only accessible when required. +- Support for automated secret rotation and detailed access auditing. -The trade-offs for using AWS Secrets Manager are that it requires added complexity in secrets management, and that -there can be significant costs associated with using Secrets Manager. +**Trade-offs**: +- Increased complexity in setup and management. +- Potentially higher costs associated with its use. -For more on how to use AWS Secrets Manager, you can refer to the [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html). +Refer to the [AWS Secrets Manager documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) for further details. #### AWS SSM Parameter Store -This is another secret store provided by AWS, and is a simpler and cheaper alternative to Secrets Manager. +AWS SSM Parameter Store is a simpler and more cost-effective alternative to Secrets Manager. It supports secret storage and access control through AWS IAM, providing a basic solution for managing sensitive data. -Permissions are granted in a similar fashion to Secrets Manager (via AWS IAM), and can be granted at a fairly -granular level. +**Advantages**: +- Lower cost compared to Secrets Manager. +- Granular access control similar to Secrets Manager. -The trade-offs for using AWS SSM Parameter Store are that it is less sophisticated than Secrets Manager, and that -it has less built-in support for rotating secrets, etc. +**Trade-offs**: +- Limited functionality compared to Secrets Manager, such as less robust secret rotation capabilities. 
-For more on how to use AWS SSM Parameter Store, you can refer to the [AWS SSM Parameter Store documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html). +Refer to the [AWS SSM Parameter Store documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) for additional information. -#### Deciding on a Secret Store +#### Deciding on a secret store -When deciding on a secret store, you should prioritize considering at least the following: +When selecting a secret store, consider the following key factors: -1. **Cost**: How much does it cost to store secrets in the secret store? -2. **Complexity**: How complex is it to manage secrets in the secret store? -3. **Granularity**: How granular can you get with permissions for accessing secrets? +1. **Cost**: Evaluate the financial implications of using a particular secret store. +2. **Complexity**: Assess how straightforward it is to set up and manage secrets. +3. **Granularity**: Determine the level of access control the store offers. -There are many more considerations that have to be evaluated based on the needs of your organization. Make sure you -have a clear understanding of what you need from a secret store before deciding on one, and make sure you coordinate -with all relevant stakeholders to ensure that the secret store you choose meets the needs of your organization. \ No newline at end of file +Choose a secret store that aligns with your organization’s security, operational, and budgetary requirements. Collaborate with relevant stakeholders to ensure the selected option meets your organizational needs effectively. 
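+Bringing these pieces together, the following is a minimal sketch of a root `terragrunt.hcl` fragment that reads the token from SSM Parameter Store at runtime and hands it to the Cloudflare provider through the `CLOUDFLARE_API_TOKEN` environment variable. The parameter name `cloudflare-api-token` is an assumption; substitute the secret store and name you settled on above:
+
+```hcl
+generate "cloudflare_provider" {
+  path      = "cloudflare_provider.tf"
+  if_exists = "overwrite_terragrunt"
+  contents  = <<EOF
+# No api_token argument here; the provider discovers CLOUDFLARE_API_TOKEN at
+# runtime, so the secret is never written to disk.
+provider "cloudflare" {}
+EOF
+}
+
+terraform {
+  extra_arguments "cloudflare_token" {
+    commands = get_terraform_commands_that_need_vars()
+    env_vars = {
+      # Assumed parameter name; fetched at runtime with the AWS CLI.
+      CLOUDFLARE_API_TOKEN = run_cmd(
+        "--terragrunt-quiet",
+        "aws", "ssm", "get-parameter",
+        "--name", "cloudflare-api-token",
+        "--query", "Parameter.Value",
+        "--output", "text",
+        "--with-decryption"
+      )
+    }
+  }
+}
+```
+
+Because `run_cmd` executes on every Terragrunt invocation, the role Pipelines assumes must be granted `ssm:GetParameter` on this parameter in every account where the unit runs.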
diff --git a/docs/2.0/docs/pipelines/guides/running-drift-detection.md b/docs/2.0/docs/pipelines/guides/running-drift-detection.md index b50504289..d1b1300c3 100644 --- a/docs/2.0/docs/pipelines/guides/running-drift-detection.md +++ b/docs/2.0/docs/pipelines/guides/running-drift-detection.md
@@ -2,42 +2,43 @@
## Detecting Drift

-Pipelines Drift Detection can be run on a manually or on a schedule.
+Pipelines Drift Detection can be executed manually or on a scheduled basis.

:::note
-We recommend starting manually and running Drift Detection against each directory of your IaC before enabling scheduled Drift Detection on your entire repository. This allows you to fix any existing drift on a smaller set of units at a time.
+It is recommended to start with manual runs, focusing on individual directories of your IaC. This approach allows you to resolve drift incrementally before enabling scheduled Drift Detection for the entire repository.
:::

### Running manually

-Pipelines Drift Detection can be run manually by navigating to Actions in your GitHub repository, selecting Pipelines Drift Detection from the left hand menu, and then selecting Run Workflow.
+You can manually initiate Pipelines Drift Detection by navigating to the Actions tab in your GitHub repository, selecting "Pipelines Drift Detection" from the left-hand menu, and then clicking "Run Workflow."

-By default the workflow will run on all units in your repository, and create a pull request on the branch `drift-detection`. You can specify a path filter to restrict Drift Detection to a subset of your units and customize the branch name. For example to to run Drift Detection only on IaC in the `management` directory, the filter should be `./management/*`. Note the leading `./`.
+By default, the workflow evaluates all units in your repository and generates a pull request on the `drift-detection` branch. To limit drift detection to specific units, specify a path filter; you can also customize the branch name. For instance, to target only the `management` directory, use the filter `./management/*` (note the leading `./`).

![Manual Dispatch](/img/pipelines/maintain/drift-detection-manual-dispatch.png)

### Running on a schedule

-To enable running on a schedule:
+To enable scheduled runs:

-1. Uncomment the schedule block containing `- cron: '15 12 * * 1'` in `.github/workflows/pipelines-drift-detection.yml`.
-1. Update the cron schedule to suit your desired frequency. The default schedule runs at 12:15UTC Monday. You can increase or decrease the frequency that the schedule runs using [crontab syntax](https://crontab.guru/#15_12_*_*_1).
-1. Each time Drift Detection runs and detects drift it will open a Pull Request in your repository. If there is an existing Drift Detection Pull Request that has not been merged it will be replaced.
+1. Uncomment the `schedule` block in `.github/workflows/pipelines-drift-detection.yml` that contains `- cron: '15 12 * * 1'`.
+2. Adjust the cron schedule to reflect your preferred frequency. The default configuration runs at 12:15 UTC every Monday. Use [crontab syntax](https://crontab.guru/#15_12_*_*_1) to customize the timing.
+3. Each time Drift Detection runs and detects drift, it opens a pull request in your repository. If an existing Drift Detection pull request remains unmerged, it will be replaced.

:::caution
-Running Pipelines Drift Detection too frequently can easily eat through your GitHub Action minutes. We recommend starting with a low frequency and increasing only when you are comfortable with the usage.
+Running Drift Detection too frequently can consume a significant number of GitHub Action minutes. Begin with a lower frequency and adjust as needed based on your usage patterns.
:::

## Resolving Drift

-Drift can be resolved by either applying the committed IaC from your repository, or modifying modules until they reflect the infrastructure state in the cloud.
+Drift can be addressed by either applying the current IaC configuration from your repository or modifying the modules to match the infrastructure state in the cloud. -### Merging The Pull Request +### Merging the pull request -Merging the Pull Request will trigger a `terragrunt apply` on the drifted modules. +Merging the pull request triggers a `terragrunt apply` on the modules identified as having drift. -### Updating Units +### Updating units -You can make modifications to modules that have drifted and commit those changes to the Drift Detection branch. Each change to a terragrunt unit change will re-trigger `terragrunt plan` in those units on the Pull Request, and you can inspect the plan to ensure that the unit no longer has drift. +Alternatively, modify the drifted modules to align them with the desired state and commit the changes to the drift-detection branch. Each change triggers a new `terragrunt plan` for the affected units, which you can review to ensure the drift is resolved. + +When the pull request is merged, Pipelines will execute `terragrunt apply` on all drifted or modified units. If a unit no longer exhibits drift, the apply operation will result in no changes being made to the infrastructure. -When the Pull Request is merged, Pipelines will run `terragrunt apply` on all the units that had drift detected **or** were modified in the Pull Request. If the unit no longer has drift the apply will be a no-op and no infrastructure changes will be made. diff --git a/docs/2.0/docs/pipelines/guides/running-plan-apply.md b/docs/2.0/docs/pipelines/guides/running-plan-apply.md index 099c34e43..7601104e3 100644 --- a/docs/2.0/docs/pipelines/guides/running-plan-apply.md +++ b/docs/2.0/docs/pipelines/guides/running-plan-apply.md @@ -1,28 +1,29 @@ # Running Plan/Apply with Pipelines -When changes are made to your committed IaC, Pipelines detects these infrastructure changes and runs Terragrunt Plan/Apply on your units. 
Changes that occur in commits that are included in Pull Requests targeting your [Deploy Branch](/2.0/reference/pipelines/configurations-as-code/api#deploy_branch_name) (e.g. `main` or `master`) will trigger Terragrunt **Plan**. Changes in commits _on_ your [Deploy Branch](/2.0/reference/pipelines/configurations-as-code/api#deploy_branch_name) will trigger a Terragrunt **Apply**.
+Pipelines automatically detects infrastructure changes in your committed IaC and runs Terragrunt Plan or Apply actions on your units. Infrastructure changes in pull request commits targeting your [Deploy Branch](/2.0/reference/pipelines/configurations-as-code/api#deploy_branch_name) (e.g., `main` or `master`) will trigger Terragrunt **Plan**. Changes in commits directly on the [Deploy Branch](/2.0/reference/pipelines/configurations-as-code/api#deploy_branch_name) will trigger Terragrunt **Apply**.

-The recommended workflow when working with Pipelines is to create a new Pull Request with the desired changes, then review the output of Terragrunt Plan to confirm that the resulting infrastructure changes are expected. We recommend enforcing [Branch Protection](/2.0/docs/pipelines/installation/branch-protection/#recommended-settings) and especially the [Require branches to be up to date](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches#require-status-checks-before-merging) status check on your repository as this will restrict the PR from being merged if the Plan may be out of date.
+The preferred workflow when working with Pipelines involves creating a new Pull Request with the desired changes, then reviewing the Terragrunt Plan output to ensure the infrastructure changes align with expectations. It is advisable to enforce [Branch Protection](/2.0/docs/pipelines/installation/branch-protection/#recommended-settings), particularly the [Require branches to be up to date](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/about-protected-branches#require-status-checks-before-merging) status check. This ensures the PR cannot be merged if the Plan is outdated.

-## Running Plan
+## Running plan

-To trigger a **Plan**, create an infrastructure change such as adding or modify a `terragrunt.hcl` unit on a new branch, then open a new Pull Request to merge this branch into your Deploy Branch. Once the Pull Request is open Pipelines will add a comment to the PR including the output of the Plan.
+To trigger a **Plan**, create an infrastructure change, such as adding or modifying a `terragrunt.hcl` unit, on a new branch. Then, open a new Pull Request to merge this branch into your Deploy Branch. Once the pull request is open, Pipelines will add a comment to it containing the Plan output.

![Screenshot of Plan Comment](/img/pipelines/guides/plan-comment.png)

-## Running Apply
+## Running apply

-To run an **Apply**, merge your changes into the Deploy Branch. All commits including Merge commits on the Deploy Branch will trigger an apply if infrastructure changes are detected.
+To initiate an **Apply**, merge your changes into the Deploy Branch. Any commits, including merge commits on the Deploy Branch, will trigger an Apply if infrastructure changes are detected.

-Pipelines will add a comment to the (merged) Pull Request with the output of the Apply.
+Pipelines will add a comment to the merged Pull Request containing the Apply output.

-## Skipping Pipelines Plan/Apply
+## Skipping Pipelines plan/apply

-You may occasionally need to skip Pipelines on particular commits.
This can be done by adding one of the [workflow skip messages](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/skipping-workflow-runs) such as `[no ci]` into your commit message. +In certain scenarios, it may be necessary to skip Pipelines for specific commits. To do this, include one of the [workflow skip messages](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/skipping-workflow-runs), such as `[no ci]`, in the commit message. -You can also modify the `paths-ignore` filter in `.github/workflows/pipelines.yml` within your repository to exclude an entire directory from triggering Pipelines. +Alternatively, adjust the `paths-ignore` filter in `.github/workflows/pipelines.yml` to prevent specific directories from triggering Pipelines. + +For example, to exclude a directory named `local-testing`, update the workflow configuration as follows: -For example, to exclude a directory with the name `local-testing` you would modify the workflow ```hcl title=".github/workflows/pipelines.yml" on: push: @@ -40,7 +41,7 @@ on: paths-ignore: - "local-testing/**" ``` +## Destroying infrastructure -## Destroying Infrastructure +To destroy infrastructure, create a commit that removes the relevant Terragrunt unit. Pipelines will detect the deletion and trigger Terragrunt to execute a `plan -destroy` on Pull Requests or a `destroy` on the Deploy Branch. Pipelines automatically retrieves the previous committed version of the infrastructure, enabling Terragrunt to run in the directory that has been deleted. -To **Destroy** infrastructure create a commit deleting the Terragrunt unit. Pipelines will detect the deletion and trigger Terragrunt to run a `plan -destroy` on pull requests or `destroy` on your Deploy Branch. Pipelines automatically checks out the previous committed version of the infrastructure so that Terragrunt can run in the (now deleted) directory. 
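To make the destroy flow above concrete, here is a sketch that simulates the commit pattern in a throwaway local repository. The unit path `management/dns-records` is hypothetical; in practice you would delete a real unit directory and push the branch so Pipelines can react:

```shell
# Simulate decommissioning a unit: committing the deletion of a unit
# directory is all Pipelines needs to schedule `plan -destroy` / `destroy`.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email "ci@example.com"
git config user.name "ci"

# A hypothetical unit that was previously deployed.
mkdir -p management/dns-records
printf 'terraform {}\n' > management/dns-records/terragrunt.hcl
git add . && git commit -qm "feat: add dns-records unit"

# Delete the unit; pushing this commit is what triggers the destroy run.
git rm -qr management/dns-records
git commit -qm "feat: decommission dns-records unit"
git diff --name-status HEAD~1 HEAD
```

On the pull request, Pipelines posts the `plan -destroy` output for review; after merge, it checks out the prior commit so Terragrunt can run `destroy` in the removed directory.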
diff --git a/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx b/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx index 52d34a0b7..d6a42658b 100644 --- a/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx +++ b/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx @@ -5,55 +5,57 @@ import CustomizableValue from '/src/components/CustomizableValue' :::note [Automatic vending of delegated repositories by Account Factory](/2.0/docs/accountfactory/guides/delegated-repositories.md) is an Enterprise-only feature. -If you are an Enterprise customer, Account Factory will automatically provision delegated repositories for you, and you may not need to follow the steps in this guide. The steps in this guide are for customers who are looking to manually set up delegated repositories, or for customers who are looking to understand how the process works from the perspective of Pipelines. +If you are an Enterprise customer, Account Factory will automatically provision delegated repositories for you, and you may not need to follow the steps in this guide. This guide is intended for customers who want to manually set up delegated repositories or understand how the process operates within Pipelines. ::: ## Introduction -Infrastructure management delegation is a first-class concept in DevOps Foundations. To learn more about delegated repositories, click [here](/2.0/docs/accountfactory/architecture/#delegated-repositories). +Infrastructure management delegation is a key feature in DevOps Foundations. To learn more about delegated repositories, click [here](/2.0/docs/accountfactory/architecture/#delegated-repositories). -Reasons you might want to delegate management of infrastructure includes: +Delegating infrastructure management might be necessary for reasons such as: -- A different team is autonomously working on parts of infrastructure relevant to a specific account. 
-- A GitHub Actions workflow in a repository needs to be able to make limited changes to infrastructure in a specific account. +- Allowing a separate team to independently manage infrastructure relevant to a specific account. +- Enabling a GitHub Actions workflow in a repository to make restricted changes to infrastructure in a specific account. - e.g. A repository has application code relevant to a container image that needs to be built and pushed to AWS ECR before it can be used in a Kubernetes cluster via a new deployment. + For example, a repository with application code may need to build and push a container image to AWS ECR before deploying it to a Kubernetes cluster. -The following guide assumes that you have already gone through [Pipelines Setup & Installation](/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md). +The following guide assumes you have completed the [Pipelines Setup & Installation](/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md). -## Step 1 - Ensure the delegated account is set up +## Step 1 - Verify the delegated account setup -Ensure that the account you want to delegate management for is set up. This includes the following: +Ensure the target account is prepared for delegation with the following: 1. The account is created in AWS. -2. An OIDC provider is set up in the account. -3. The account has the following roles provisioned: - - `infrastructure-live-access-control-plan` - - `infrastructure-live-access-control-apply` +2. An OIDC provider is configured in the account. +3. The account includes the following roles: + - `infrastructure-live-access-control-plan` + - `infrastructure-live-access-control-apply` -If the account was provisioned normally using Account Factory, these roles should already be set up. +These roles should already exist if the account was provisioned through Account Factory. 
-If you want more information about exactly how this works, read [GitHub OIDC docs](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services). +For more details, refer to [GitHub OIDC documentation](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services). -## Step 2 - Ensure that the `infrastructure-live-access-control` repository is provisioned. +## Step 2 - Confirm the `infrastructure-live-access-control` repository setup -The [infrastructure-live-access-control](/2.0/docs/pipelines/architecture/security-controls.md#infrastructure-access-control) repository is an optionally provisioned part of DevOps Foundations, and it's the recommended way of delegating access to infrastructure. +The [infrastructure-live-access-control](/2.0/docs/pipelines/architecture/security-controls.md#infrastructure-access-control) repository is an optional but recommended component of DevOps Foundations for delegating access to infrastructure. -If you don't have this repository set up, you can follow the steps in the [infrastructure-live-root-template](https://github.com/gruntwork-io/infrastructure-live-root-template) to provision it. +If this repository is not already set up, you can provision it using the steps in the [infrastructure-live-root-template](https://github.com/gruntwork-io/infrastructure-live-root-template). -This repository will be where you manage the IAM access that your delegated repository will have. +This repository will serve as the control point for managing IAM access for your delegated repository. ## Step 3 - Provision the delegated role -To provision a role that can be assumed by the delegated repository, you will want to add it to the `infrastructure-live-access-control` repository. 
+To create a role for the delegated repository, add it to the `infrastructure-live-access-control` repository. :::tip -Typically, CI roles created for Pipelines are created in pairs, one for the `plan` stage and one for the `apply` stage. This is because the `plan` stage should have more limited permissions than the `apply` stage, as plans typically only need read-only access. +CI roles for Pipelines are typically created in pairs: one for the `plan` stage and another for the `apply` stage. This structure limits permissions, granting read-only access during the `plan` stage. -If you are creating a role to do something like push a container image to ECR on push to the repository, you may only need a single role. +For tasks such as pushing a container image to ECR, you might only need a single role. ::: -Use Terragrunt Scaffold to create the new role in your `infrastructure-live-access-control` repository. +Use Terragrunt Scaffold to create the new role in the `infrastructure-live-access-control` repository. + + ```bash # Assuming your `infrastructure-live-access-control` repository is named exactly that, @@ -63,14 +65,14 @@ cd acme/_global/ecr-push-role terragrunt scaffold 'git@github.com:gruntwork-io/terraform-aws-security.git//modules/github-actions-iam-role?ref=v0.73.2' ``` -This will give you a placeholder `terragrunt.hcl` file for a new role in your repository that you can customize to your needs. +This will create a placeholder `terragrunt.hcl` file for a new role in your repository, which you can modify to suit your specific requirements. -Alternatively, you can copy and paste the following: +Alternatively, you can use the example configuration below: :::note -Note the value of `allowed_sources`, which should be the organization, name, and ref of the repository you are delegating to. +Pay attention to the `allowed_sources` value. This field should specify the organization, name, and ref of the repository being delegated to. 
-If you would like to make it so that all refs in a repository can assume this role, you can use `["*"]` as the value on the right hand side.
+If you want to allow all refs in a repository to assume this role, you can set the value to `["*"]`.
:::

```hcl
@@ -83,8 +85,8 @@ include "root" {
path = find_in_parent_folders()
}

-# Include the component configuration, which has settings that are common for the component across all environments
-include "envcommon" {
+# Incorporate the component configuration. This includes settings that are shared across all environments for the component.
+include "envcommon" {
path = "${dirname(find_in_parent_folders("common.hcl"))}/_envcommon/landingzone/delegated-pipelines-plan-role.hcl"
merge_strategy = "deep"
}
@@ -94,16 +97,16 @@ inputs = {
github_actions_openid_connect_provider_url = "https://token.actions.githubusercontent.com"

# ----------------------------------------------------------------------------------------------------------------
- # This is the map of repositories to refs that are allowed to assume this role.
+ # This defines the map of repositories to refs that are permitted to assume this role.
#
- # Note that for a plan role, typically the only additional permissions that are required are read permissions that
- # grant Terragrunt permission to read the existing state in provisioned infrastructure, such that a plan of proposed
- # updates can be generated.
+ # For a plan role, additional permissions are generally limited to read access, enabling Terragrunt
+ # to access the existing state of provisioned infrastructure. This ensures that a plan of proposed updates can be generated.
#
- # Also note that all refs are allowed to assume this role, as the plan role is typically assumed in refs used
- # as sources for pull requests. Assign permissions keeping this in mind.
+ # Note that all refs are permitted to assume this role since plan roles are typically assumed by refs
+ # used as sources for pull requests.
Ensure permissions are assigned with this in mind. # - # Read more on least privilege below. + # Refer to the documentation on least privilege for further details. + # ---------------------------------------------------------------------------------------------------------------- allowed_sources = { @@ -111,20 +114,20 @@ inputs = { } # ---------------------------------------------------------------------------------------------------------------- - # Least privilege is an important best practice, but can be a very difficult practice to engage in. + # Least privilege is a critical best practice but can be challenging to implement effectively. # - # The `envcommon` include above provides the minimal permissions required to interact with TF state, however - # any further permissions are up to the user to define as needed for a given workflow. + # The `envcommon` include above provides the foundational permissions necessary to interact with Terraform state. + # Additional permissions, however, must be defined by the user based on the specific needs of a given workflow. # - # These permissions are meant to be continuously refined in a process of iteratively granting additional permissions - # as needed to have workflows updated in CI correctly, and then removing excess permissions through continuous review. + # The permissions should be continuously refined by iteratively granting additional access as workflows in CI evolve + # and then removing excess permissions through regular reviews. # - # A common pattern used to refine permissions is to run a pipeline with a best guess at the permissions required, or - # no permissions at all, and then review access denied errors and add the necessary permissions to have the pipeline - # run successfully. 
+ # A typical approach to refining permissions involves running a pipeline with an initial guess of required permissions
+ # (or none at all), reviewing any access denied errors, and then adding only the permissions necessary to enable
+ # successful execution of the pipeline.
#
- # As workload patterns become more commonplace, this repo will serve as a reference for the permissions required to
- # run similar workloads going forward.
+ # Over time, as workload patterns stabilize, this repository will serve as a reference for permissions needed to
+ # support similar workflows, streamlining the process for future updates.
# ----------------------------------------------------------------------------------------------------------------

iam_policy = {
@@ -133,11 +136,11 @@
}
```

-Note the `envcommon` include, which includes the common minimal configurations recommended for delegated roles in DevOps Foundations.
+Take note of the `envcommon` include, which incorporates the recommended baseline configurations for delegated roles within DevOps Foundations.

-You will likely need to expand the `iam_policy` block to include the permissions required for your specific workflow.
+You will probably need to extend the `iam_policy` block to define permissions tailored to your specific workflow requirements.

-For example, if you would like permissions to push to ECR, you might add the following:
+For instance, if you require permissions to push to ECR, you might include the following:

```hcl
iam_policy = {
@@ -163,7 +166,8 @@ iam_policy = {
}
```

## Step 4 - Apply the role

-Once you have customized the role to your needs, you can apply it by creating a pull request in the `infrastructure-live-access-control` repository.
+Once you’ve customized the role configuration, create a pull request in the `infrastructure-live-access-control` repository. Reviewing, approving, and merging the pull request applies the role to the AWS account.
+

```bash
git add .
@@ -172,11 +176,11 @@ git push
 gh pr create --base main --title "feat: Add ECR push role for acme account" --body "This PR adds the ECR push role for the acme account."
 ```
 
-Inspect the pull request, verify the plan, then merge the pull request to get it applied.
+Inspect the pull request thoroughly, review the associated plan output to confirm the role configuration aligns with your requirements, and merge the pull request to apply the changes.
 
-## Step 5 - Set up the delegated repository
+## Step 5 - Configure the delegated repository
 
-Depending on what the repository needs to do in CI, your GitHub Actions workflow may be as simple as a file like the following placed in `.github/workflows/ci.yml`:
+The configuration of the delegated repository depends on the specific tasks it needs to perform during CI/CD workflows. For basic setups, the GitHub Actions workflow can include a file like the following placed in `.github/workflows/ci.yml`:
 
 ```yaml
 name: CI
diff --git a/docs/2.0/docs/pipelines/guides/terragrunt-env-vars.md b/docs/2.0/docs/pipelines/guides/terragrunt-env-vars.md
index c25cfdc42..d5d13bf3d 100644
--- a/docs/2.0/docs/pipelines/guides/terragrunt-env-vars.md
+++ b/docs/2.0/docs/pipelines/guides/terragrunt-env-vars.md
@@ -1,28 +1,27 @@
-# Leveraging advanced Terragrunt Features
+# Leveraging Advanced Terragrunt Features
 
 ## Introduction
 
-When Pipelines detects changes to IaC in your infrastructure repositories it will invoke `terragrunt` with a specific set of command line arguments for the detected change. For example for a change to a single unit in a pull request pipelines will `chdir` into the unit directory and invoke `terragrunt plan --terragrunt-non-interactive`.
+When Pipelines detects changes to Infrastructure as Code (IaC) in your repositories, it invokes `terragrunt` with a predefined set of command-line arguments for the detected changes. For instance, if a single unit is modified in a pull request, Pipelines will `chdir` into the unit's directory and execute `terragrunt plan --terragrunt-non-interactive`.
 
-You can inspect the specific command in different scenarios by viewing the logs for a Pipelines workflow run.
+You can view the specific commands used in different scenarios by examining the logs of a Pipelines workflow run.
 
-In some cases you may find that you need to pass additional options to terragrunt to meet your specific needs. All cli options for Terragrunt also have a corresponding Environment Variable that if populated will change Terragrunt behavior.
+In some situations, you may need to provide additional options to `terragrunt` to accommodate specific requirements. Many Terragrunt CLI options can be controlled through environment variables, allowing for flexible customization of its behavior.
 
-See the full list of available options in the Terragrunt documentation.
+Refer to the complete list of available options in the [Terragrunt CLI documentation](https://terragrunt.gruntwork.io/docs/reference/cli-options/#cli-options).
 
-## Adding Environment Variables
+## Adding environment variables
 
 :::note
 
-For security reasons GitHub workflows do not automatically pass environment variables from the workflows in your repository into the included workflows in Gruntwork repositories, and you will need to add them to the Pipelines configuration file for them to propagate to Terragrunt executions.
+GitHub workflows do not automatically pass environment variables from your repository's workflows into those included from Gruntwork repositories. To propagate environment variables to Terragrunt executions, you must add them to the Pipelines configuration file.
 
 :::
 
-Pipelines can be configured to pass additional Environment Variables to Terragrunt via the [env configuration option](/2.0/reference/pipelines/configurations#env) in `.gruntwork/config.yml`.
+You can configure Pipelines to pass additional environment variables to Terragrunt using the [env configuration option](/2.0/reference/pipelines/configurations#env) in `.gruntwork/config.yml`.
 
-Each item in the env sequence corresponds to an Environment Variable name and value.
+Each entry in the `env` sequence represents an environment variable name and its value.
 
-For example you may want to add the flag `---terragrunt-strict-include` to your Pipelines Terragrunt runs. To do so you would set the environment variable `TERRAGRUNT_STRICT_INCLUDE` to `true` in your Pipelines configuration.
+For example, to enable the `--terragrunt-strict-include` flag in your Terragrunt runs, set the environment variable `TERRAGRUNT_STRICT_INCLUDE` to `true` in the Pipelines configuration file.
 
-E.g.
 ```yml title=".gruntwork/config.yml"
 pipelines:
   env:
@@ -30,6 +29,6 @@ pipelines:
       value: true
 ```
 
-On the next workflow run you can inspect the workflow logs and look for an env: block on the action executing Terragrunt. If everything is configured correctly you will see your additional Environment Variable has been passed through to the action.
+On the next workflow run, review the workflow logs and locate the `env:` block for the action that executes Terragrunt. If the configuration is correct, your additional environment variable will appear in the `env:` block, confirming it has been successfully passed to the action.
 
 ![Screenshot of additional Environment Variable](/img/pipelines/guides/custom-env-var.png)
diff --git a/docs/2.0/docs/pipelines/guides/updating-pipelines.md b/docs/2.0/docs/pipelines/guides/updating-pipelines.md
index 33523bd5c..1890b37b7 100644
--- a/docs/2.0/docs/pipelines/guides/updating-pipelines.md
+++ b/docs/2.0/docs/pipelines/guides/updating-pipelines.md
@@ -1,8 +1,8 @@
 # Updating Your Pipeline
 
-Staying up to date with the latest in Gruntwork Pipelines is fairly simple. We release new versions of the Pipelines CLI, the associated GitHub Actions Workflows and the underlying custom GitHub Actions regularly to provide the optimal experience for managing infrastructure changes at scale.
+Keeping Gruntwork Pipelines updated is straightforward. Regular updates are released for the Pipelines CLI, associated GitHub Actions Workflows, and the custom GitHub Actions to ensure optimal performance and scalability for managing infrastructure changes.
 
-To pull in the latest changes across all three of these dimensions, you can simply edit the `pipelines.yml` file found under `.github/workflows` in any repository integrated with Gruntwork Pipelines in order to select the latest version of the Pipelines GitHub Actions Workflow:
+To apply the latest updates across these components, modify the `pipelines.yml` file located in the `.github/workflows` directory of any repository integrated with Gruntwork Pipelines. Update the file to reference the latest version of the Pipelines GitHub Actions Workflow:
 
 ```yml
 jobs:
@@ -10,8 +10,11 @@ jobs:
     uses: gruntwork-io-team/pipelines-workflows/.github/workflows/pipelines-root.yml@v0.0.5
 ```
 
-Due to our integration with [Dependabot](https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide), you can also automatically receive pull requests that suggest updates to the `pipelines.yml` file in your repository by leveraging a `.github/dependabot.yml` file. This will help you stay up to date with the latest changes in Gruntwork Pipelines. DevOps Foundations customers receive this configuration as part of their `infrastructure-live` repositories by default.
+Due to our integration with [Dependabot](https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide), you can automatically receive pull requests suggesting updates to the `pipelines.yml` file in your repository by including a `.github/dependabot.yml` file. This ensures your repository stays aligned with the latest changes in Gruntwork Pipelines. DevOps Foundations customers receive this configuration as part of their `infrastructure-live` repositories by default.
 
-## Updating Customized Workflows
+## Updating customized workflows
+
+If you have customized workflows as outlined in [Extending Pipelines](/2.0/docs/pipelines/guides/extending-pipelines.md), maintaining updates to these workflows may require additional effort. For those who have forked the [pipelines-workflows](https://github.com/gruntwork-io/pipelines-workflows) repository to implement customizations, manual updates will be necessary to incorporate the latest changes from the upstream repository.
+
+To update your workflows, follow the instructions provided in the [GitHub documentation](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork). This process applies as long as there are no conflicts between your customizations and the upstream repository.
 
-Note that if you follow the instructions under [Extending Pipelines](/2.0/docs/pipelines/guides/extending-pipelines.md), you may have incurred greater burden in maintaining updates to your customized workflows. If you decide to fork the [pipelines-workflows](https://github.com/gruntwork-io/pipelines-workflows) repository to customize your workflows, you will need to manually update your workflows to include the latest changes from the upstream repository. This can be done by following the instructions in the [GitHub documentation](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork), as long as you have not made changes that conflict with the upstream repository.
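
Reviewer note: for context, the `.github/dependabot.yml` file mentioned in the Dependabot paragraph above can be sketched as follows. This is a minimal illustrative version using Dependabot's standard `github-actions` ecosystem, not necessarily the exact configuration Gruntwork ships to DevOps Foundations customers; the `weekly` interval is an assumption and can be adjusted.

```yaml
# Minimal sketch of a Dependabot configuration for workflow updates.
version: 2
updates:
  - package-ecosystem: "github-actions" # watches `uses:` references in workflow files
    directory: "/"                      # "/" directs Dependabot to .github/workflows
    schedule:
      interval: "weekly"                # illustrative; daily/monthly also supported
```

With this file in place, Dependabot opens pull requests against `pipelines.yml` whenever a new tagged release of a referenced workflow, such as `pipelines-workflows`, becomes available.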