
Nov 29, 2017

At Merapar we regularly deploy Kubernetes on AWS and leverage IAM roles so our microservices can gain access to AWS services such as DynamoDB, S3, Route53, etc. Because we want to follow the security principle of least privilege, we don't want every microservice to have the same IAM permissions. This blog illustrates how fine-grained AWS IAM roles can be assigned at the pod or container level in Kubernetes.

There are several possible approaches to managing credentials for access to AWS services:

Kubernetes Secrets seem ideal at first because everything is managed within Kubernetes. However, secrets are only base64 encoded, not encrypted, so you need to be careful when storing sensitive credentials: base64 is as easily decoded as it is encoded! It is not the best place to store credentials for AWS services, for example. Also bear in mind that statically configured AWS user credentials would, in most situations, not meet security requirements for key rotation.
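
As a quick illustration of why this matters, anyone with read access to a Secret can recover its plaintext value in one line (the secret name and key below are hypothetical):

# Create a secret holding a (fake) AWS access key:
kubectl create secret generic aws-creds --from-literal=access-key=AKIAEXAMPLE

# The "protection" is only base64 encoding, so decoding is trivial:
kubectl get secret aws-creds -o jsonpath='{.data.access-key}' | base64 --decode
# Output: AKIAEXAMPLE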

AWS IAM roles can help us here. What are IAM roles? Here is the AWS description:

“An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user.”

Obviously, you can assign a global IAM role to a Kubernetes node which is the union of all IAM roles required by all containers and pods running in the Kubernetes cluster. From a security standpoint this is probably not an acceptable solution, because that global IAM role can be inherited by all pods and containers running in the cluster.

Consider the following example: we can assign IAM roles to Kubernetes nodes and give the Kubernetes pods hosting the Storage Service access to the IAM role for AWS S3 access. However, as the role is assigned to the node, other services running on that node, like the Web Service, can also assume the same role and gain access.

Default behavior when assigning IAM roles to Kubernetes nodes.

So instead of assigning IAM roles to nodes, we should assign roles to Kubernetes pods. Depending on the IAM role linked to a pod, it then has certain privileges, like read access to DynamoDB or read/write access to a particular S3 bucket.

At this stage, Kubernetes does not natively support AWS IAM roles and permissions. Hence you need to give special consideration to how you want to provide IAM permissions to your nodes, pods, and containers.

We found our solution in Kube2IAM (https://github.com/jtblin/kube2iam).

“Kube2IAM’s solution is to redirect the traffic that is going to the ec2 metadata API for docker containers to a container running on each instance, make a call to the AWS API to retrieve temporary credentials and return these to the caller. Other calls will be proxied to the ec2 metadata API. This container will need to run with host networking enabled so that it can call the ec2 metadata API itself.”

The result is that each service gets its own fine-grained IAM role (or shares an IAM role) with certain privileges.
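
You can see this in action from inside a pod: with Kube2IAM running, a request to the metadata credentials path is intercepted and answered with temporary credentials for the pod's annotated role (the role name below is from our example and is illustrative):

# From inside an annotated pod: which role does the metadata API expose to us?
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Output: S3-BUCKET-Role

# Fetch the temporary credentials Kube2IAM assumed on our behalf:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/S3-BUCKET-Role
# Returns JSON with AccessKeyId, SecretAccessKey, Token and Expiration.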

There is a more privileged intermediate Kube2IAM role, which is used by Kube2IAM to assume the more fine-grained roles and obtain temporary credentials for the pods. The key to assigning these fine-grained roles is that the IAM role for the S3 bucket has a two-way trust relationship with the Kube2IAM role.

Kube2IAM in Kubernetes IAM role overview.

Let's look in more detail at the roles and how they link together…

S3 Bucket Role & Policy

Here is the IAM Policy attached to the S3 Bucket role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME"
      ]
    }
  ]
}

And the trust relationships for this role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTID:role/Kube2IAM-Role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
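
If you manage these roles by hand, the AWS CLI can create them. Here is a sketch, assuming the two JSON documents above are saved locally (the file names are our own):

# Create the fine-grained role with the trust policy shown above:
aws iam create-role \
  --role-name S3-BUCKET-Role \
  --assume-role-policy-document file://s3-bucket-trust.json

# Attach the s3:ListBucket policy as an inline policy:
aws iam put-role-policy \
  --role-name S3-BUCKET-Role \
  --policy-name s3-list-bucket \
  --policy-document file://s3-bucket-policy.json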

Kube2IAM Role & Policy

The policy of the Kube2IAM role looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::ACCOUNTID:role/S3-BUCKET-Role"
      ]
    }
  ]
}

And the Trust relationships:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::ACCOUNTID:role/S3-BUCKET-Role"
        ]
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
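
Before wiring up Kubernetes, you can verify the trust chain manually from an EC2 node that carries the Kube2IAM role (the session name is arbitrary):

# A successful call returns temporary credentials (AccessKeyId,
# SecretAccessKey, SessionToken); an AccessDenied error means a
# trust relationship is missing on one side.
aws sts assume-role \
  --role-arn arn:aws:iam::ACCOUNTID:role/S3-BUCKET-Role \
  --role-session-name kube2iam-trust-test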

Note that the Kube2IAM role has a trust relationship both with the other fine-grained roles that will be assumed and with the EC2 instances. This way the Kube2IAM role becomes the linking pin between the Kubernetes service roles and the EC2 instance, which gives access to the metadata endpoint used by the AWS SDKs to obtain rotating credentials.

In order for Kube2IAM to assign a role to a specific pod, you add an iam.amazonaws.com/role annotation with the role that you want the pod to assume.

Here is an example of a Kubernetes pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli
  labels:
    name: aws-cli
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::ACCOUNTID:role/S3-BUCKET-Role
spec:
  containers:
  - image: fstab/aws-cli
    command:
    - "/home/aws/aws/env/bin/aws"
    - "s3"
    - "ls"
    - "some-bucket"
    name: aws-cli
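
Assuming the manifest is saved as aws-cli-pod.yaml (the file name is ours), you can run it and confirm the bucket listing succeeds with only the annotated role:

kubectl apply -f aws-cli-pod.yaml

# The container runs "aws s3 ls some-bucket" and exits; its log should
# show the bucket contents, retrieved via the annotated role:
kubectl logs aws-cli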

Restricting EC2 metadata API access

To avoid containers bypassing the Kube2IAM container and obtaining credentials for the privileged intermediate Kube2IAM role, the iptables rules on each node must be adjusted.

To prevent containers from directly accessing the ec2 metadata API and gaining unwanted access to AWS resources, the traffic to 169.254.169.254 must be proxied for all docker containers.

The iptables config would look like this:

iptables \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface docker0 \
  --jump DNAT \
  --table nat \
  --to-destination `curl 169.254.169.254/latest/meta-data/local-ipv4`:8181

This rule can be added automatically by the Kube2IAM container. This is achieved by setting --iptables=true, setting the HOST_IP environment variable, and running the container in a privileged security context.
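
To confirm on a node that the redirect is in place, you can list the NAT table (exact output varies by iptables version):

# Look for a DNAT rule redirecting metadata traffic to port 8181:
sudo iptables --table nat --list PREROUTING --numeric | grep 169.254.169.254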

For the Kube2IAM container we use a Kubernetes DaemonSet, so an instance of the container is always scheduled on every Kubernetes node and can intercept the requests from the other pods on that node.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube2iam
  labels:
    app: kube2iam
spec:
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      hostNetwork: true
      containers:
      - image: jtblin/kube2iam:latest
        name: kube2iam
        args:
        - "--iptables=true"
        - "--host-ip=$(HOST_IP)"
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 8181
          hostPort: 8181
          name: http
        securityContext:
          privileged: true
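
A minimal way to roll this out and check that one Kube2IAM pod lands on every node (the file name is ours):

kubectl apply -f kube2iam-daemonset.yaml

# One kube2iam pod per node, listed with its node assignment:
kubectl get pods -l name=kube2iam -o wide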

More details can be found in the docs in the Kube2IAM repository: https://github.com/jtblin/kube2iam

Conclusion

In summary, it works as shown below: Kube2IAM acts as a kind of proxy, using the Kube2IAM role to assume the fine-grained roles on behalf of the pods.

IAM role access management overview.

Kube2IAM is an awesome add-on for Kubernetes clusters running on AWS. It secures access by granting containers fine-grained IAM roles and enforcing their access privileges through role annotations.