Serverful Jenkins agents
In the third part of this serverless Jenkins series (part 1, part 2) we will take a look at what cannot be done with a serverless setup. As many benefits as serverless and Fargate bring, certain tasks still require a full server; building those Docker images for Fargate is just one example. We could of course fall back to creating a server and configuring it as a Jenkins agent, but that would leave us with the same problems of maintenance, security updates and missing scalability that we tried to avoid from the start. Jenkins' powerful plugin ecosystem again provides a solution.
The Amazon EC2 plugin can start EC2 instances on demand whenever we need them and terminates them once they are no longer needed. For this to work we only need two additional resources: an SSH key and a security group that allows the master to connect to the agent server.
resource "aws_key_pair" "jenkins_agents_autogen" {
key_name = "jenkins_agents_autogen"
public_key = "ssh-rsa <omitted>"
}
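As an aside, if you prefer not to paste a pre-generated public key, Terraform can also create the key pair itself. A minimal sketch, assuming the hashicorp/tls provider; note that the generated private key is stored in the Terraform state, which may or may not be acceptable depending on how your state is secured:
# Hypothetical alternative: let Terraform generate the agent key pair.
# Caveat: the private key ends up in the Terraform state file.
resource "tls_private_key" "jenkins_agents" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "jenkins_agents_autogen" {
  key_name   = "jenkins_agents_autogen"
  public_key = tls_private_key.jenkins_agents.public_key_openssh
}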
resource "aws_security_group" "jenkins_agent_ec2" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.jenkins_master.id]
}
...
}
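To make these values easy to find when filling in the plugin configuration later, we can expose them as Terraform outputs. A small convenience sketch, assuming the resource names above:
output "jenkins_agent_key_name" {
  value = aws_key_pair.jenkins_agents_autogen.key_name
}

output "jenkins_agent_sg_id" {
  # Security group ID to attach to the agent instances
  value = aws_security_group.jenkins_agent_ec2.id
}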
Once the plugin is installed we can again navigate to Manage Jenkins → Manage Nodes and Clouds and add our EC2 cloud. For the basic settings we just give it a name, the private SSH key and a maximum number of instances. Testing the connection should report success.
Similar to the Fargate agents, we have to define the AMIs we want to start. The important parts here are the AMI ID, the instance type, the remote user used to log in to the machine and the label that jobs will use to run on these instances.
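If you manage the AMI lookup in Terraform anyway, a data source can resolve the latest AMI ID instead of hardcoding it. A minimal sketch, assuming an Amazon Linux 2 base image; the filter and owner values are examples, not values from this series:
# Look up the most recent Amazon Linux 2 AMI (example filter values)
data "aws_ami" "jenkins_agent" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
The resulting data.aws_ami.jenkins_agent.id can then be copied into the plugin's AMI ID field.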
Running a simple job on this new "ec2default" labeled node will now spin up a new server for us, which is automatically destroyed after a defined idle time.
The major difference between the EC2 and the ECS/Fargate plugin is that EC2 instances stay around for as long as they are needed, while Fargate Docker tasks are used for the execution of a single job and then destroyed. This is mostly due to the longer startup times of EC2 instances: projects with many consecutive builds get the chance to reuse nodes and thus save on instance creation time. To make sure instances don't get cluttered with residue, it is possible to define a maximum number of reuses per instance. This saves us the time-consuming effort of writing clean-up tasks.
Spot instances
Now, to cut costs even further when using EC2 instances, we can configure the plugin to request Spot instances. For those unfamiliar with the AWS Spot system: with Spot, AWS offers spare compute capacity in their data centers at steep discounts compared to on-demand pricing. The system works like a market: you offer a price to AWS, and if they have free resources and your offer is good enough, you get the instance for as long as your bid stays above the current spot price.
In the advanced section of the AMI configuration we can enable Spot and define a maximum bid price. If we really need the instance we can enable a fallback to regular on-demand instances, which is more expensive but gives us the certainty of getting the compute resources we need.
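To pick a sensible maximum bid it helps to know the current spot price for the chosen instance type. A small sketch using the AWS provider's spot price data source; the instance type and availability zone are example values:
# Current spot price for a t3.medium in one availability zone (example values)
data "aws_ec2_spot_price" "agent" {
  instance_type     = "t3.medium"
  availability_zone = "eu-central-1a"

  filter {
    name   = "product-description"
    values = ["Linux/UNIX"]
  }
}

output "current_spot_price" {
  value = data.aws_ec2_spot_price.agent.spot_price
}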
Wrap up
Since this is the last part of the serverless Jenkins CI/CD on AWS series, let's wrap up what we have achieved.
- Jenkins is now running in Fargate with EFS as a storage backend, which means that we don't have to care about resources, uptime or storage. Even Jenkins upgrades are just a matter of changing a single variable in our Terraform repository.
- We outsourced authentication to an external OpenID provider so access and all permissions can be managed centrally in groups.
- Agents, the actual heroes in this story, now scale horizontally and vertically in either EC2 or ECS Fargate, and no team will ever have to wait for resources again.
- Build tool configurations on the agents are now stored in Docker images or AMIs, which forces us to do two things:
- Keep secrets away from the agents or they would be exposed (Yes, that is actually a very good thing!)
- Keep configurations documented in Git and have a build pipeline for the Docker images and AMIs (a sketch of the AMI part follows below)
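For the AMI side of that pipeline, a Packer template, which conveniently also uses HCL, could look roughly like this. The AMI name, region, base image and installed tools are all assumptions for illustration, not the exact setup from this series:
locals {
  # Timestamp suffix so every build produces a uniquely named AMI
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "jenkins_agent" {
  ami_name      = "jenkins-agent-${local.timestamp}"
  instance_type = "t3.small"
  region        = "eu-central-1"
  ssh_username  = "ec2-user"

  # Build on top of the latest Amazon Linux 2 image (example filter)
  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-*-x86_64-gp2"
      virtualization-type = "hvm"
    }
    owners      = ["amazon"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.jenkins_agent"]

  # Install the build tools the agent image should ship with (examples)
  provisioner "shell" {
    inline = ["sudo yum install -y git java-11-amazon-corretto docker"]
  }
}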
The actual costs for this setup depend heavily on your settings, and especially on how many builds you run over time. Arguably this easily beats every classic setup in terms of cost, since agents only run when and for as long as they are needed. If you want to make this even cheaper, you can use Fargate Spot tasks to save up to 70%. The only real scaling limitation is the size of your VPC: since every task uses an IP address while it is running, the number of tasks that can run in parallel is restricted by the number of free IPs in your VPC.
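If you want to try the Fargate Spot option mentioned above and your ECS cluster is managed in Terraform, enabling it is a one-resource change. A sketch assuming AWS provider 4.x and a hypothetical cluster resource named aws_ecs_cluster.jenkins; the ECS plugin's agent template then needs to be pointed at the matching capacity provider:
# Sketch: allow the (hypothetical) "jenkins" cluster to run FARGATE_SPOT tasks
resource "aws_ecs_cluster_capacity_providers" "jenkins" {
  cluster_name       = aws_ecs_cluster.jenkins.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 1
  }
}
Should you hit the IP limitation or have any further questions, contact ByteSource.net to receive support from our certified professionals.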