Serverless Jenkins - Scaling Jenkins to infinity and beyond - Part 1


Jörg Herzinger


Serverless Jenkins Master

Everybody loves Jenkins. When it comes to automation, building and scheduling, there is hardly another free and open source tool out there that can be extended as much through its powerful plugin system. There are even official Docker images for Jenkins, so we don't need to worry about that part. At this point I would like to take the time to say "Thank you soooooo much" to CloudBees for providing this incredible piece of software. For those of you wanting to jump right into the game, you can use our Terraform module here and run it in your own account.

We don't want to go into the details of installing and maintaining Jenkins here, because that has been covered countless times already, but one thing every sysadmin out there knows about it is that it uses a simple file backend to store all of its configuration and data. There is no database or similar; a simple directory is enough for Jenkins to run. Some might see this as ingenious, since there are no outside dependencies except for an HTTP proxy of some sort and a few gigabytes of disk space. Others might argue that this restriction makes running Jenkins serverlessly in Docker troublesome. Here, however, we want to present a way to get the best of both worlds by running Jenkins and its agents in AWS ECS Fargate, thus minimizing maintenance while scaling Jenkins to the limits of your credit card (AWS will still bill you for the resources you use).

So here is the basic idea of what we are about to create.


We want all our resources and services to be secured inside our own network, and at the same time we want as many services as possible provided by AWS, to minimize our maintenance efforts. So, to store the files Jenkins needs, we will be using EFS, for which Fargate support was announced in April 2020.

EFS performance can be tuned and optimized, but I/O is usually not a bottleneck for Jenkins, so we won't worry about it at this point. Additionally, this EFS share will be the only state we hold, so for production deployments it will be necessary to take backups, for which there are several options that we will cover in a later blog entry.

So let's create our Jenkins master in Fargate:

resource "aws_ecs_task_definition" "jenkins" {
  family                = "jenkins"
  container_definitions = file("task-definitions/jenkins.json")
  task_role_arn         = aws_iam_role.ecs_task_provisioning.arn

  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  memory = 1024 # 1 GB of RAM
  cpu    = 512  # 0.5 vCPU (512 CPU units)

  volume {
    name = "jenkins-home"

    efs_volume_configuration {
      # Assumes an EFS file system and access point resource, both named
      # "jenkins" here for illustration; see below for how they can be defined.
      file_system_id     = aws_efs_file_system.jenkins.id
      transit_encryption = "ENABLED"

      authorization_config {
        access_point_id = aws_efs_access_point.jenkins.id
        iam             = "ENABLED"
      }
    }
  }
}
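The EFS file system and access point referenced in the volume configuration have to be created as well. A minimal sketch (the resource names and the `/jenkins-home` path are illustrative, not from the post), assuming the official Jenkins image's default UID/GID of 1000:

```hcl
resource "aws_efs_file_system" "jenkins" {
  creation_token = "jenkins-home"
  encrypted      = true
}

resource "aws_efs_access_point" "jenkins" {
  file_system_id = aws_efs_file_system.jenkins.id

  # The official jenkins/jenkins image runs as UID/GID 1000
  posix_user {
    uid = 1000
    gid = 1000
  }

  root_directory {
    path = "/jenkins-home"
    creation_info {
      owner_uid   = 1000
      owner_gid   = 1000
      permissions = "755"
    }
  }
}
```

Using an access point with a `posix_user` means the container does not need to run as root to own its files on the share.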

So that's all we need to have our Jenkins installation running with zero maintenance from now on. Jenkins will hold all of its data, plugins and configuration on the EFS share, so taking backups is really simple and the only thing we need to do from time to time. Furthermore, we don't have to worry about the stability or availability of the Docker hosting service, because AWS already takes care of this for us.
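The post does not show the content of `task-definitions/jenkins.json` itself. A minimal sketch of what it could contain, assuming the official `jenkins/jenkins:lts` image (expressed here with Terraform's `jsonencode`, which can replace the `file(...)` call entirely):

```hcl
container_definitions = jsonencode([
  {
    name      = "jenkins"
    image     = "jenkins/jenkins:lts"
    essential = true

    # Jenkins' web UI; the load balancer target group points here
    portMappings = [
      { containerPort = 8080, protocol = "tcp" }
    ]

    # Mount the "jenkins-home" volume declared in the task definition
    # at Jenkins' default home directory
    mountPoints = [
      { sourceVolume = "jenkins-home", containerPath = "/var/jenkins_home" }
    ]
  }
])
```

Keeping the definition inline with `jsonencode` has the advantage that Terraform can interpolate values such as image tags or environment variables.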


Fargate is not just great because it reduces maintenance; it also reduces costs compared to conventional EC2 instances.

| Service | approx. monthly costs |
| :-- | :-: |
| Jenkins Master Fargate task (2 vCPU, 4 GB RAM) | 80 € |
| AWS Application LoadBalancer service | 10 € |
| EFS Storage (25 GB) | 10 € |

So depending on your requirements you can start running Jenkins for about 100 € per month, and that is the total and final price, because there are no hidden maintenance costs and no downtime due to unforeseen networking or other problems.

Single Sign On Authentication

In a previous post we discussed running Jenkins serverlessly on AWS Fargate, holding state only on an EFS share. One of the most crucial and difficult pieces of state is user management. From a security perspective this is especially important for build servers like Jenkins, since they might hold sensitive information about the network infrastructure, production deployments and maybe even secrets like passwords. Managing access to such critical resources is thus a central concern for enterprises. Here we will cover how to outsource this authentication to an OpenID provider like Okta.

We don't want to go into the details of OpenID Connect here, but it is ideal because most identity providers out in the wild support it, we can add it directly to the AWS LoadBalancer, and there is a nice Jenkins plugin for it. In Okta you can create a new OpenID application in the admin interface and it will provide you with the necessary information. A crucial point is that both sides, the LoadBalancer/Jenkins and Okta, must have the exact same URLs configured, otherwise mutual trust is not possible.

OpenID authentication on an application load balancer

You might wonder why we configure the load balancer to do the authentication if there is a Jenkins plugin that can do so. Well, you are right, this step is not strictly necessary, but in general it is a good idea to have two layers of authentication, and additionally this is a nice exercise that can be reused elsewhere, where no application support for OpenID is available.

Let's have a look at the Terraform part:

resource "aws_lb_listener_rule" "jenkins" {
  listener_arn = aws_lb_listener.front_end.arn
  priority     = 99

  action {
    type = "authenticate-oidc"

    authenticate_oidc {
      authorization_endpoint = "<get this from your openid provider>"
      client_id              = "<get this from your openid provider>"
      client_secret          = "<get this from your openid provider>"
      issuer                 = "<get this from your openid provider>"
      token_endpoint         = "<get this from your openid provider>"
      user_info_endpoint     = "<get this from your openid provider>"
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.jenkins.arn
  }

  condition {
    host_header {
      values = ["jenkins.bytesource.internal"]
    }
  }
}

The OIDC section is filled with information Okta generated for us, and with the client id and secret the first step of mutual trust is established. The second part is, as mentioned, the login URL, which usually is the URL of your Application LoadBalancer. Here we used our own internal domain for readability.
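For reference, with Okta's default authorization server these endpoints follow a common pattern. A sketch, assuming a hypothetical Okta domain `dev-123456.okta.com` and variables holding the credentials (replace with your own values from the Okta admin interface):

```hcl
authenticate_oidc {
  issuer                 = "https://dev-123456.okta.com/oauth2/default"
  authorization_endpoint = "https://dev-123456.okta.com/oauth2/default/v1/authorize"
  token_endpoint         = "https://dev-123456.okta.com/oauth2/default/v1/token"
  user_info_endpoint     = "https://dev-123456.okta.com/oauth2/default/v1/userinfo"
  client_id              = var.okta_client_id     # from the Okta application
  client_secret          = var.okta_client_secret # keep out of version control
}
```

Passing the secret via a variable (or a secrets manager data source) keeps it out of the Terraform code itself.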


Every Application LoadBalancer in AWS comes with OpenID available at the context path "/oauth2/idpresponse". We also configured a hybrid approach for Okta so either Okta or Jenkins can initiate the login. If you want to allow access from browser bookmarks you will want to use this too. At this point we can test access to Jenkins from a private window and it should redirect us to Okta for authentication.

OpenID for Jenkins

Now that we have everything working outside of Jenkins, we also want Jenkins itself to authenticate our users. Without this, users will not be "known" to Jenkins, which means you cannot see which user triggered a build. So let's head over to Jenkins and install the OpenID Connect plugin that we need.

In Jenkins' Global Security Configuration we can add our OpenID provider. Just fill in the client id and secret; depending on your OpenID provider you might need to configure some settings manually, as is the case for Okta.


The "scopes" setting defines which information Okta should pass on to us. We want the user's groups, so Jenkins can be further configured with access restrictions depending on them. There is also a neat "escape hatch" that allows us to configure a local backdoor administrator in case things break down. Needless to say, this password should be a really strong one.

Now, let's refresh Jenkins in our browser and we should see our name in the upper right corner.

That's it: we have achieved a zero maintenance Jenkins installation, and we don't even have to worry about managing users, not to mention their credentials. Of course, this installation is hardly usable yet, because there are no build tools installed in our Jenkins Docker container. Using the Jenkins master for builds of any kind is usually not a good idea anyway and will not scale, so next we will look into running agents in the same scalable, zero maintenance way we did for Jenkins itself.

Next, head over to part 2 detailing how jobs can be run in Fargate containers and part 3 showing how to run jobs on actual EC2 instances.
