EKS Auto mode cannot be disabled #5105
Comments
The scope of this is even broader: preview is broken as well whenever the auto-mode settings are unknown (computed) at plan time.

I have a hunch that one of the issues is the use of unknown values: whenever one of the auto-mode settings is unknown during planning, upstream mishandles it. I'm able to repro the same in Terraform using the program below. The crucial bit here is that the three different auto-mode settings (compute_config.enabled, kubernetes_network_config.elastic_load_balancing.enabled, and storage_config.block_storage.enabled) need to either all be unknown, all be false, or all be set to true. Upstream is not handling the unknown case correctly and interprets it as false. I'll cut a ticket with them. A consistent variant is sketched after the repro.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    # hashicorp/random is used by random_id below; declared explicitly here.
    random = {
      source = "hashicorp/random"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "random_id" "example" {
  byte_length = 8
}

locals {
  # Use a conditional expression to create a computed boolean that is
  # unknown during the initial plan (random_id does not exist yet).
  auto_mode_enabled = random_id.example.dec % 2 == 0
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}

resource "aws_eks_cluster" "example" {
  name     = "auto-mode-issues"
  role_arn = aws_iam_role.cluster.arn
  version  = "1.31"

  access_config {
    authentication_mode = "API"
  }

  vpc_config {
    subnet_ids = [
      module.vpc.private_subnets[0],
      module.vpc.private_subnets[1],
    ]
  }

  # compute_config is driven by a value that is unknown during the initial
  # plan, while the two auto-mode toggles below are hardcoded to true.
  dynamic "compute_config" {
    for_each = local.auto_mode_enabled ? [1] : []
    content {
      enabled       = local.auto_mode_enabled
      node_role_arn = aws_iam_role.node.arn
      node_pools    = ["general-purpose", "system"]
    }
  }

  kubernetes_network_config {
    elastic_load_balancing {
      enabled = true
    }
  }

  storage_config {
    block_storage {
      enabled = true
    }
  }

  depends_on = [
    aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.node_worker,
    aws_iam_role_policy_attachment.node_ecr_pull,
  ]
}

resource "aws_iam_role" "cluster" {
  name_prefix = "eks-cluster-example"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.cluster.name
}

resource "aws_iam_role" "node" {
  name_prefix = "eks-cluster-node"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "node_worker" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role_policy_attachment" "node_ecr_pull" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly"
  role       = aws_iam_role.node.name
}
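For contrast, a minimal sketch of the same cluster with the three auto-mode toggles kept in lockstep, so they are all unknown, all false, or all true together. This variant is my assumption and not part of the original repro: the resource name is made up, and node_role_arn/node_pools are null-ed when compute is disabled on the assumption that they are only valid while compute is enabled.

# Sketch: a variant of the repro's cluster where every auto-mode toggle
# derives from the same local.auto_mode_enabled value.
resource "aws_eks_cluster" "consistent" {
  name     = "auto-mode-consistent"
  role_arn = aws_iam_role.cluster.arn
  version  = "1.31"

  access_config {
    authentication_mode = "API"
  }

  vpc_config {
    subnet_ids = [
      module.vpc.private_subnets[0],
      module.vpc.private_subnets[1],
    ]
  }

  compute_config {
    enabled       = local.auto_mode_enabled
    # Assumed: these two may only be set while compute is enabled.
    node_role_arn = local.auto_mode_enabled ? aws_iam_role.node.arn : null
    node_pools    = local.auto_mode_enabled ? ["general-purpose", "system"] : null
  }

  kubernetes_network_config {
    elastic_load_balancing {
      enabled = local.auto_mode_enabled
    }
  }

  storage_config {
    block_storage {
      enabled = local.auto_mode_enabled
    }
  }

  depends_on = [
    aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.node_worker,
    aws_iam_role_policy_attachment.node_ecr_pull,
  ]
}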
I created an upstream fix for it: hashicorp/terraform-provider-aws#41155
Describe what happened
Upstream has an issue that prevents disabling EKS Auto Mode without replacing a cluster: hashicorp/terraform-provider-aws#40582
Disabling it fails with:
To work around this, I'd recommend disabling Auto Mode manually (AWS CLI or Console) and then running pulumi refresh.
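For example, with the AWS CLI (a sketch: CLUSTER_NAME is a placeholder for your cluster, and all three Auto Mode settings are turned off in a single call before reconciling Pulumi's state):

# Disable all three EKS Auto Mode settings together.
aws eks update-cluster-config \
  --name "$CLUSTER_NAME" \
  --compute-config enabled=false \
  --kubernetes-network-config '{"elasticLoadBalancing":{"enabled":false}}' \
  --storage-config '{"blockStorage":{"enabled":false}}'

# Then pull the updated cluster state into Pulumi.
pulumi refresh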
Sample program
Run pulumi up with the following program and then remove the autoMode block before running pulumi up again.
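The sequence looks roughly like this (a sketch of the steps above):

pulumi up    # first deployment, with the autoMode block in the program
# remove the autoMode block from the program, then:
pulumi up    # fails: Auto Mode cannot be disabled without replacing the cluster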
Log output
n/a
Affected Resource(s)
Output of pulumi about
n/a
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).