Unable to enable the RBAC on the cluster after creation. #611

Open
1 task done
Annavar-satish opened this issue Nov 27, 2024 · 2 comments
Labels
bug, waiting-response

Comments


Annavar-satish commented Nov 27, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Greenfield/Brownfield provisioning

brownfield

Terraform Version

1.9.8

Module Version

9.0.0

AzureRM Provider Version

3.117.0

Affected Resource(s)/Data Source(s)

module.aks.azurerm_kubernetes_cluster.main

Terraform Configuration Files

main.tf

module "aks" {
  source                                         = "Azure/aks/azurerm"
  version                                        = "9.0.0"
  cluster_name                                   = var.cluster_name
  resource_group_name                            = var.rg_name
  kubernetes_version                             = var.kubernetes_version
  agents_availability_zones                      = var.agents_availability_zones
  enable_auto_scaling                            = true
  auto_scaler_profile_max_node_provisioning_time = var.auto_scaler_profile_max_node_provisioning_time
  agents_max_count                               = var.agents_max_count
  agents_min_count                               = var.agents_min_count
  agents_size                                    = var.agents_size
  agents_pool_name                               = var.agents_pool_name
  agents_tags                                    = var.default_tags
  azure_policy_enabled                           = true
  net_profile_dns_service_ip                     = var.net_profile_dns_service_ip
  net_profile_service_cidr                       = var.net_profile_service_cidr
  automatic_channel_upgrade                      = var.automatic_channel_upgrade
  node_os_channel_upgrade                        = var.node_os_channel_upgrade
  vnet_subnet_id                                 = data.terraform_remote_state.vnet.outputs.xxxxxx_azure_1_vnet_subnets[0] #var.private_subnet_id
  log_analytics_workspace_enabled                = true
  prefix                                         = var.prefix
  tags                                           = var.default_tags
  rbac_aad_azure_rbac_enabled                    = true
 - rbac_aad                                       = false
 + rbac_aad                                       = true
 + role_based_access_control_enabled              = true
  oidc_issuer_enabled                            = true
  workload_identity_enabled                      = true
  temporary_name_for_rotation                    = var.temporary_name_for_rotation
}
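
For reference, the live cluster's current RBAC state can be checked with the Azure CLI before planning this change. A minimal sketch, using the resource group and cluster name from the tfvars below; the --query field names are an assumption about the az aks show output shape:

# Hypothetical pre-check of the existing cluster's RBAC settings (field names assumed)
az aks show --resource-group terraform --name XXXXXX-azure-1 --query "{rbacEnabled: enableRbac, aadProfile: aadProfile}" --output json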

version.tf

terraform {
  required_version = ">= 1.2"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.11, < 4.0"
    }
  }
}

tfvars variable values

rg_name                                        = "terraform"
agents_size                                    = "Standard_D4s_v4"
kubernetes_version                             = "1.29"
automatic_channel_upgrade                      = "patch"
agents_availability_zones                      = ["1", "2", "3"]
pvt_subnet1_address_prefixes                   = "10.0.0.0/19"
auto_scaler_profile_max_node_provisioning_time = "10m"
agents_max_count                               = "3"
agents_min_count                               = "3"
cognyx_azure_1_identity                        = "XXXXXX-azure-1"
pvt_subnet1_name                               = "XXXXXX-azure-1-private"
agents_pool_name                               = "XXXXXXazure1"
net_profile_dns_service_ip                     = "xxx.x.x.xx"
net_profile_service_cidr                       = "xxx.x.x.x/xx"
prefix                                         = "XXXXXX"
node_os_channel_upgrade                        = "SecurityPatch"
location                                       = "francecentral"
cluster_name                                   = "XXXXXX-azure-1"
temporary_name_for_rotation                    = "tmpnodepool"

agents_labels = {
  "cluster" : "XXXXXX-azure-1"
}
default_tags = {
  name  = "XXXXXX-1"
  owner = "XXXXXX"
}

Debug Output/Panic Output

terraform plan output

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
+/- create replacement and then destroy

Terraform will perform the following actions:

  # local_file.kubeconfig will be created
  + resource "local_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/fullpath/xxxxxx-azure-test-kubeconfig.yaml"
      + id                   = (known after apply)
    }

  # module.aks.azurerm_kubernetes_cluster.main is tainted, so must be replaced
+/- resource "azurerm_kubernetes_cluster" "main" {
      ~ api_server_authorized_ip_ranges     = [] -> (known after apply)
      ~ current_kubernetes_version          = "1.29.9" -> (known after apply)
      - custom_ca_trust_certificates_base64 = [] -> null
      - enable_pod_security_policy          = false -> null
      ~ fqdn                                = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -> (known after apply)
      - http_application_routing_enabled    = false -> null
      + http_application_routing_zone_name  = (known after apply)
      ~ id                                  = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" -> (known after apply)
      ~ kube_admin_config                   = (sensitive value)
      + kube_admin_config_raw               = (sensitive value)
      ~ kube_config                         = (sensitive value)
      ~ kube_config_raw                     = (sensitive value)
      - local_account_disabled              = false -> null
        name                                = "xxxxxx-azure-test"
      ~ node_resource_group                 = "XX_xxxxxxxxxxxxxxxxxxxxxxxxx_francecentral" -> (known after apply)
      ~ node_resource_group_id              = "/xxxxxxxxxxxx/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/xxxxxxxxxxxxx/XX_xxxxxxxxx_xxxxxx-xxxxx-xxxx_francecentral" -> (known after apply)
      ~ oidc_issuer_url                     = "https://xxxxxxxxxxxxx.xxx.xxxx-xxx.xxxxx.xxx/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/" -> (known after apply)
      - open_service_mesh_enabled           = false -> null
      ~ portal_fqdn                         = "xxxxxx-xxxx-xxxxxxxx.xxxxxx.xxx.xxxxxxxxxxxxx.xxxxxx.xx" -> (known after apply)
      + private_dns_zone_id                 = (known after apply)
      + private_fqdn                        = (known after apply)
      ~ role_based_access_control_enabled   = false -> true
        tags                                = {
            "name"  = "xxxxxx-xxxx"
            "owner" = "xxxxxx-xxxx"
        }
        # (21 unchanged attributes hidden)

      ~ api_server_access_profile (known after apply)

      ~ auto_scaler_profile (known after apply)
      - auto_scaler_profile {
          - balance_similar_node_groups      = false -> null
          - empty_bulk_delete_max            = "10" -> null
          - expander                         = "random" -> null
          - max_graceful_termination_sec     = "600" -> null
          - max_node_provisioning_time       = "15m" -> null
          - max_unready_nodes                = 3 -> null
          - max_unready_percentage           = 45 -> null
          - new_pod_scale_up_delay           = "0s" -> null
          - scale_down_delay_after_add       = "10m" -> null
          - scale_down_delay_after_delete    = "10s" -> null
          - scale_down_delay_after_failure   = "3m" -> null
          - scale_down_unneeded              = "10m" -> null
          - scale_down_unready               = "20m" -> null
          - scale_down_utilization_threshold = "0.5" -> null
          - scan_interval                    = "10s" -> null
          - skip_nodes_with_local_storage    = false -> null
          - skip_nodes_with_system_pods      = true -> null
        }

      + azure_active_directory_role_based_access_control {
          + managed   = false
          + tenant_id = (known after apply)
        }

      ~ default_node_pool {
          - custom_ca_trust_enabled       = false -> null
          - fips_enabled                  = false -> null
          ~ kubelet_disk_type             = "OS" -> (known after apply)
          ~ max_pods                      = 110 -> (known after apply)
            name                          = "xxxxxxxxxx"
          ~ node_count                    = 3 -> (known after apply)
          ~ node_labels                   = {} -> (known after apply)
          - node_taints                   = [] -> null
          - only_critical_addons_enabled  = false -> null
          ~ orchestrator_version          = "1.29" -> (known after apply)
          ~ os_sku                        = "Ubuntu" -> (known after apply)
            tags                          = {
                "name"  = "xxxxxx-xxxx"
                "owner" = "xxxxxx-xxxx"
            }
          + workload_runtime              = (known after apply)
            # (22 unchanged attributes hidden)

          ~ upgrade_settings {
              - drain_timeout_in_minutes      = 0 -> null
                # (2 unchanged attributes hidden)
            }
        }

      ~ identity {
          - identity_ids = [] -> null
          ~ principal_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -> (known after apply)
          ~ tenant_id    = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -> (known after apply)
            # (1 unchanged attribute hidden)
        }

      ~ kubelet_identity (known after apply)
      - kubelet_identity {
          - client_id                 = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -> null
          - object_id                 = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -> null
          - user_assigned_identity_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/XX_xxxxxxxxx_xxxxxx-xxxxx-xxxx_francecentral/xxxxxxxxx/xxxxxxxxx.xxxxxxxxxxxxxxx/xxxxXxxxxxxxXxxxxxxxxx/xxxxxx-xxxxx-xxxx-xxxxxxxx" -> null
        }

      ~ network_profile {
          + docker_bridge_cidr      = (known after apply)
          + ebpf_data_plane         = (known after apply)
          ~ ip_versions             = [
              - "IPv4",
            ] -> (known after apply)
          ~ network_data_plane      = "azure" -> (known after apply)
          + network_mode            = (known after apply)
          + network_policy          = (known after apply)
          ~ outbound_ip_address_ids = [] -> (known after apply)
          ~ outbound_ip_prefix_ids  = [] -> (known after apply)
          ~ pod_cidr                = "10.244.0.0/16" -> (known after apply)
          ~ pod_cidrs               = [
              - "10.244.0.0/16",
            ] -> (known after apply)
          ~ service_cidrs           = [
              - "xxx.x.x.x/xx,
            ] -> (known after apply)
            # (6 unchanged attributes hidden)

          ~ load_balancer_profile (known after apply)
          - load_balancer_profile {
              - effective_outbound_ips      = [
                  - "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/XX_xxxxxxxxx_xxxxxx-xxxxx-xxxx_xxxxxxxxxxxxx/xxxxxxxxx/xxxxxxxx.xxxxxxx/xxxxxxxxxxxxxxxxx/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
                ] -> null
              - idle_timeout_in_minutes     = 0 -> null
              - managed_outbound_ip_count   = 1 -> null
              - managed_outbound_ipv6_count = 0 -> null
              - outbound_ip_address_ids     = [] -> null
              - outbound_ip_prefix_ids      = [] -> null
              - outbound_ports_allocated    = 0 -> null
            }

          ~ nat_gateway_profile (known after apply)
        }

      ~ oms_agent {
          - msi_auth_for_monitoring_enabled = false -> null
          ~ oms_agent_identity              = [
              - {
                  - client_id                 = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
                  - object_id                 = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
                  - user_assigned_identity_id = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/XX_xxxxxxxxx_xxxxxx-xxxxx-xxxx_xxxxxxxxxxxxx/xxxxxxxxx/xxxxxxxxx.xxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxx/xxxxxxxx-xxxxxx-xxxxx-xxxx"
                },
            ] -> (known after apply)
            # (1 unchanged attribute hidden)
        }

      ~ windows_profile (known after apply)
    }

Plan: 2 to add, 0 to change, 1 to destroy.

Changes to Outputs:
  ~ aks_id   = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/terraform/xxxxxxxxx/xxxxxxxxx.xxxxxxxxxxxxxxxx/xxxxxxxxxxxxxx/xxxxxx-xxxxx-xxxx" -> (known after apply)
  ~ password = (sensitive value)
  ~ username = (sensitive value)

Expected Behaviour

No response

Actual Behaviour

terraform apply error output

 Error: A resource with the ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/terraform/xxxxxxxxx/xxxxxxxxx.xxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxx/xxxxxx-xxxxx-xxxx" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster" for more information.
│ 
│   with module.aks.azurerm_kubernetes_cluster.main,
│   on .terraform/modules/aks/main.tf line 17, in resource "azurerm_kubernetes_cluster" "main":
│   17: resource "azurerm_kubernetes_cluster" "main" {
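
For reference, the message points at importing the existing cluster back into state. A minimal recovery sketch, assuming the existing cluster should be kept: drop the stale (tainted) state entry and re-import the cluster under the same address. The resource ID below is a placeholder, since the real ID is redacted above.

# Hypothetical recovery steps; <subscription-id> and <cluster-name> are placeholders
terraform state rm module.aks.azurerm_kubernetes_cluster.main
terraform import module.aks.azurerm_kubernetes_cluster.main "/subscriptions/<subscription-id>/resourceGroups/terraform/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"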

Steps to Reproduce

No response

Important Factoids

No response

References

No response

@zioproto
Collaborator

Hello, can you give more context on:

module.aks.azurerm_kubernetes_cluster.main is tainted, so must be replaced

Did you have a failed terraform apply operation?

@Annavar-satish
Author

Hi @zioproto, I have attached the terraform plan output in the Debug Output/Panic Output section, and the error I was facing during terraform apply is in the Actual Behaviour section.

The resource given in the error was already created and managed by Terraform. I was changing the cluster's configuration: previously the cluster did not have RBAC enabled, and when I tried to enable RBAC it triggered a recreation of the cluster. The problem is that Terraform tried to create the replacement first and destroy the old cluster only afterwards, and it attempted to create the replacement with the same name. That was the issue.
terraform apply error output:

Error: A resource with the ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/terraform/providers/Microsoft.ContainerService/managedClusters/xxxxxx-azure-test" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster" for more information.
│ 
│   with module.aks.azurerm_kubernetes_cluster.main,
│   on .terraform/modules/aks/main.tf line 17, in resource "azurerm_kubernetes_cluster" "main":
│   17: resource "azurerm_kubernetes_cluster" "main" {
│ 

I have tainted this resource:
module.aks.azurerm_kubernetes_cluster.main is tainted, so must be replaced
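
Since the plan output above attributes the replacement to the taint ("is tainted, so must be replaced") rather than to the RBAC flags themselves, one possible way to avoid the create-before-destroy name collision, sketched here only as an assumption, is to clear the taint and re-plan before applying:

# Hypothetical: clear the manual taint so only the in-place RBAC change remains in the plan
terraform untaint module.aks.azurerm_kubernetes_cluster.main
terraform plan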
