Software Engineering

Building a Kubernetes Cluster on AWS EKS using Terraform – Part III

Part III – configuring Security Groups

In the last article of the series, we built the networking infrastructure our cluster needs, including the VPC, Subnets, Route Tables and Gateways we need to make connections into the cluster possible. We put these changes into a separate module to make the overall project structure easier to understand.

Today we will configure the connections in our cluster on a finer level using Security Groups. Amazon Web Services uses these groups to define exactly which connections can reach a set of instances. Each group has its own set of ingress and egress rules that decide which connections are allowed into and out of the group, similar to a firewall.

Making sense of our structure

We will use this article to set up some basic Security Groups to show how they work without setting up the actual resources we want to put into the groups – that would be way too much for a single article! But for the Security Group configurations to make sense, we need an overview of what we actually want to build. Let me put it into context:

This looks very complicated, but we already have half of it set up! The VPC and the subnets were already created in the last two articles. The application subnets will be the home for our primary resources: the EKS master and nodes. While the nodes can be deployed in different subnets to cover multiple Availability Zones, they will still share the same Security Group. The little locks in the diagram represent the Security Groups.

Creating Security Groups

Creating the Security Groups with Terraform is easy:
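A minimal sketch of what the two groups could look like; the resource names and the vpc_id variable are assumptions for illustration:

resource "aws_security_group" "eks-master" {
  name   = "terraform-eks-master"
  vpc_id = var.vpc_id

  # Allow all outgoing connections from the master
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "eks-nodes" {
  name   = "terraform-eks-nodes"
  vpc_id = var.vpc_id

  # Allow all outgoing connections from the nodes
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}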

All that is needed for creation is a name and the ID of our VPC. We also add an initial rule right away using the "egress" block. This block defines a rule for connections going out of the group. Using zeros for the ports and CIDR and -1 for the protocol means we allow everything – when our nodes or master need to connect to the outside for something, we let it through. Of course this can be tightened for more secure setups, but for now this is the easiest approach.

Adding rules to existing Security Groups

With the Security Groups set up the way they are now, we still have a problem: they allow no incoming connections at all. This means that the nodes can’t register at the master and the master can’t poll the nodes for information about our deployments. To fix this, we need to add additional rules. Let’s begin by making sure we can connect to our EKS master and nodes from the computer we use to start them up:
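A sketch of these rules, assuming the group names from above and a variable accessing_computer_ip that holds your workstation’s address in CIDR notation (for example 203.0.113.10/32):

# Allow our workstation to reach the master via HTTPS
resource "aws_security_group_rule" "master-https-from-workstation" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = [var.accessing_computer_ip]
  security_group_id = aws_security_group.eks-master.id
}

# Allow our workstation to reach the nodes via SSH
resource "aws_security_group_rule" "nodes-ssh-from-workstation" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = [var.accessing_computer_ip]
  security_group_id = aws_security_group.eks-nodes.id
}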

"cidr_blocks" defines the IP (or a range of IPs) we want to allow to connect into the group. "security_group_id" is the ID of the group the rule is added to.

The rules we create allow our IP into the master group if we use HTTPS on port 443, and into the node group if we use SSH on port 22. That means we can later connect to tools like the Kubernetes Dashboard, or SSH into the node instances directly in case something goes wrong.

Additionally, we want to make sure that the Master and Nodes can connect to each other to form a healthy Kubernetes cluster:
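A deliberately permissive sketch, again assuming the group names from above; a production setup would usually narrow the allowed ports and protocols:

# Allow all traffic from the master group into the node group
resource "aws_security_group_rule" "nodes-from-master" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  source_security_group_id = aws_security_group.eks-master.id
  security_group_id        = aws_security_group.eks-nodes.id
}

# Allow all traffic from the node group into the master group
resource "aws_security_group_rule" "master-from-nodes" {
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  source_security_group_id = aws_security_group.eks-nodes.id
  security_group_id        = aws_security_group.eks-master.id
}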

Instead of using "cidr_blocks" we use "source_security_group_id" for this set of rules. This means we allow any instance that is part of a certain group to connect into the other group, making it much easier to coordinate the network permissions for a multitude of groups and instances. We don’t need to define rules for IPs by hand at all! We just create the relevant instances as part of these Security Groups later.

Sharing information between modules

If we now put our newly generated code into a new module, there is one more thing we need to do. Our Security Group configuration requires the VPC ID which is generated in another module. We can’t just access information from another module directly, so we need to create an output in our first module:
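Assuming the VPC resource in the networking module is called aws_vpc.main, the output could look like this:

# Expose the VPC ID so other modules can reference it
output "vpc_id" {
  value = aws_vpc.main.id
}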

This makes the module expose that piece of information as an output value that other modules can read. You can then add the new module to our original modules.tf file:
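A sketch of the modules.tf file, assuming the modules live in local directories named networking and security:

module "networking" {
  source = "./networking"
}

module "security" {
  source = "./security"

  # Pass the VPC ID from the networking module into the security module
  vpc_id                = module.networking.vpc_id
  accessing_computer_ip = var.accessing_computer_ip
}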

Don’t forget that you also need to define the variables in each context, like the "vpc_id" in the new module and the "accessing_computer_ip" in both the new module and the root module.
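A minimal sketch of those variable definitions; the descriptions are only illustrative:

# Needed in the new Security Group module
variable "vpc_id" {
  description = "ID of the VPC the Security Groups are created in"
}

# Needed in both the new module and the root module
variable "accessing_computer_ip" {
  description = "Workstation address in CIDR notation, e.g. 203.0.113.10/32"
}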

Short and Sweet

While we didn’t create a lot of different resources today, we still configured a very important part of our AWS infrastructure. First and foremost, these example groups and rules should help you understand how Security Groups work. All too often, problems in AWS infrastructure stem from incorrectly configured Security Groups, so understanding them properly is important. We will add further groups and rules for resources at later points.

In the next article, we will set up the actual EKS cluster: both the fully AWS-provided master and the EKS nodes, which are a little trickier to configure – especially if you want to make them highly available.

You can check out the code in my GitHub repository for the article series – don’t forget to enter your values for the access keys and region in the .tfvars file and the state bucket configuration before running it!