EKS Pod Identity for the Upbound AWS Provider
The native Crossplane providers currently do not support EKS Pod Identity, the successor to IAM Roles for Service Accounts (IRSA) that grants pods IAM permissions without configuring an OIDC identity provider or managing static credentials. The Upbound providers, built with the upjet engine, do support EKS Pod Identity. In this blog post, we will show you how to use EKS Pod Identity with the Upbound AWS provider for S3. The same approach works for the other Upbound AWS providers, such as those for RDS, EC2, etc.
Terraform Configuration
INFO
To be able to use EKS Pod Identity, the EKS Pod Identity Agent must be installed on the EKS cluster.
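The agent ships as an EKS managed add-on. One way to install it, assuming the cluster is managed with Terraform (`aws_eks_cluster.example` is a placeholder for your own cluster resource):

```terraform
# Installs the EKS Pod Identity Agent as an EKS managed add-on.
resource "aws_eks_addon" "pod_identity" {
  cluster_name = aws_eks_cluster.example.name
  addon_name   = "eks-pod-identity-agent"
}
```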
The service account requires IAM permissions to access the AWS resources it manages. The following Terraform snippet creates an IAM role with the necessary permissions for the service account to access S3.
```terraform
resource "aws_iam_policy" "crossplane_s3_policy" {
  name        = "crossplane-eu-west-1-s3-policy"
  description = "Policy for crossplane s3 controller to manage S3 buckets and objects"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:*"
        ]
        Resource = "*" # no resource ARN can be specified because we need to be able to create new buckets
      }
    ]
  })
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["pods.eks.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
      "sts:TagSession"
    ]
  }
}

resource "aws_iam_role_policy_attachment" "additional_policies" {
  policy_arn = aws_iam_policy.crossplane_s3_policy.arn
  role       = aws_iam_role.iam_for_s3.name
}

resource "aws_iam_role" "iam_for_s3" {
  name               = "crossplane-eu-west-1-s3-Role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
```
Afterward, the following Terraform snippet creates the necessary aws_eks_pod_identity_association resource. You'll need to provide the aws_eks_cluster.example.name and aws_iam_role.example.arn values, which are the name of your EKS cluster and the ARN of the IAM role to associate with the service account.
```terraform
resource "aws_eks_pod_identity_association" "crossplane_s3" {
  cluster_name    = aws_eks_cluster.example.name
  namespace       = "crossplane-system"
  service_account = "provider-aws-s3"
  role_arn        = aws_iam_role.example.arn
}
```
You can verify the association was created by running the following command:
```shell
aws eks list-pod-identity-associations --cluster-name $ClusterName
```
Crossplane Provider Configuration
WARNING
If you create the provider before the pod identity association exists, the provider pod has to be restarted before the configuration takes effect.
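In that case the provider deployment can be restarted with kubectl. The deployment name below is an example; list the deployments in crossplane-system to find yours:

```shell
# Restart the S3 provider so it picks up the pod identity association.
# The deployment name includes a package hash and will differ in your cluster.
kubectl -n crossplane-system rollout restart deployment provider-aws-s3-8691ce5b9d4b
```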
First we must create the Provider for the provider-family-aws, which provides the base resources (ProviderConfig, ...) for all upbound AWS providers.

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-family-aws
  namespace: crossplane-system
spec:
  package: xpkg.upbound.io/upbound/provider-family-aws:v1
```
Create the DeploymentRuntimeConfig for the provider, which specifies the service account name to use. This service account must match the one used in the Terraform configuration above.

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-aws-pod-id-drc
  namespace: crossplane-system
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-aws-s3
```
Then we create the AWS S3 Provider & ProviderConfig, which reference the DeploymentRuntimeConfig.

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
  namespace: crossplane-system
spec:
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1
  runtimeConfigRef:
    name: provider-aws-pod-id-drc
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: provider-aws-s3
spec:
  credentials:
    source: PodIdentity
```
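To verify the setup end to end, a managed resource can reference this ProviderConfig via providerConfigRef. A minimal sketch, where the bucket name and region are placeholders:

```yaml
# Hypothetical test bucket; pick a globally unique name and your own region.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: pod-identity-test-bucket
spec:
  forProvider:
    region: eu-west-1
  providerConfigRef:
    name: provider-aws-s3
```

If the pod identity association works, the Bucket should become Ready; otherwise kubectl describe on the Bucket will surface the authentication error.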
| Object | Purpose |
|---|---|
| DeploymentRuntimeConfig | Allows you to specify values for the provider deployment (node selector, service account name, etc.). |
| ProviderConfig | Supplies the credentials source (IRSA, Pod Identity, Secret, etc.) and is the object that every managed resource references via providerConfigRef. |
| Provider | Defines which provider package should be used. |
Troubleshooting
Under Construction
The provider containers should have the AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE environment variables, which are injected by EKS Pod Identity.
The Crossplane provider container does not have a shell, so you'll need a debug container to check the environment variables:
```shell
kubectl debug -it -n crossplane-system `
  pod/provider-aws-s3-8691ce5b9d4b-d9f586758-2sncd `
  --image=nicolaka/netshoot `
  --target=package-runtime `
  --share-processes `
  -- /bin/bash
```
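Because --share-processes puts the debug container in the pod's process namespace, you can read the provider process's environment from /proc. A sketch, assuming the provider runs as PID 1 in the shared namespace (verify with ps first):

```shell
# Inside the debug container: list the AWS-related environment variables
# of the provider process. Replace 1 with the provider's PID if it differs.
cat /proc/1/environ | tr '\0' '\n' | grep '^AWS_'
```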