terraform - Switching Terraform 0.12.6 to 0.13.0 gives me: provider ["registry.terraform.io/-/null"] is required, but it has been removed

Tags: terraform terraform-provider-aws amazon-eks

I manage state remotely in terraform-cloud.
I downloaded and installed the latest Terraform 0.13 CLI.
Then I deleted the .terraform directory.
Then I ran terraform init, with no errors.
Then I ran:

➜ terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...
Here is the content of modules/kubernetes/main.tf:
###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}


#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 policy to workers
# so we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
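
For context, Terraform 0.13 resolves every provider through a fully-qualified source address declared in a required_providers block, which is why the legacy -/null entry recorded in 0.12-era state no longer matches anything. A minimal sketch of such a block for this configuration (the aws and null version constraints here are assumptions):

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.0" # assumed constraint
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 2.0" # assumed constraint
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.9" # taken from the provider block above
    }
  }
}

Terraform 0.13 also ships a "terraform 0.13upgrade" command that can generate this block from an existing configuration.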

Best Answer

All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
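
The "--" ends option parsing so that the legacy address -/null, which begins with a dash, is not mistaken for a flag; -/null is how 0.12-era state records a provider that had no registry namespace.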


This fixed my problem with this error, and the next error after it; I repeated it until every provider version in the Terraform state had been upgraded.
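
Since this configuration also uses the aws and kubernetes providers, the state most likely records legacy entries for them too. A sketch of the same fix applied across the board (the -/aws and -/kubernetes addresses are assumptions; check terraform providers first to see what your state actually references):

# Show every provider the configuration and state reference;
# legacy 0.12 entries show up under the "-" namespace.
terraform providers

# Re-point each legacy entry at its fully-qualified 0.13 address.
terraform state replace-provider -auto-approve -- -/aws registry.terraform.io/hashicorp/aws
terraform state replace-provider -auto-approve -- -/kubernetes registry.terraform.io/hashicorp/kubernetes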

Regarding "terraform - Switching Terraform 0.12.6 to 0.13.0 gives me: provider ["registry.terraform.io/-/null"] is required, but it has been removed", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/63590836/
