When I started to dig deeper into my previous master-slave Jenkins setup, I realized that my configuration needed more customization. In a real-world scenario, we want Jenkins to build or run tasks using extra utilities. For example, we may want to run an Ansible playbook as a scheduled Jenkins job. Unfortunately, that won’t work with the default slave image because the Ansible binaries are missing. Today, I want to show you how to fix this and add more utilities to our Docker Jenkins slave image.

jnk1.png

First, let’s create a Dockerfile with our custom packages. The jenkins/jnlp-slave image is built on top of a Debian-based OpenJDK image. That means we can use “apt-get” to add more software. Because I want the latest available Ansible version (which is 2.7.5 at the time of writing), I’m going to use pip to install it. Here is the modified Dockerfile I’m going to use:

FROM jenkins/jnlp-slave

USER root

RUN mkdir -p /etc/ansible && chmod 755 /etc/ansible

COPY ansible.cfg /etc/ansible/ansible.cfg 

RUN apt-get update && apt-get install python3-pip build-essential libssl-dev libffi-dev python-dev python3-venv sshpass -y && apt-get clean

RUN pip3 install ansible

USER jenkins

ENTRYPOINT ["jenkins-slave"]

I hope most of these commands are self-explanatory, but just in case, I’ll describe them in detail:

  1. I use the “jenkins/jnlp-slave” image, latest tag, as the base.
  2. I switch to the root user (in the parent image, the default user is jenkins).
  3. I create the /etc/ansible configuration directory and set the proper permissions on it.
  4. Next, I copy the ansible.cfg configuration file. It’s very simple; the only thing it does is disable host_key_checking for Ansible. This is important, because otherwise Ansible would stop and ask you to confirm every unknown SSH host key, which breaks non-interactive jobs (an environment-variable alternative is shown right after this list).
[defaults]
host_key_checking = False
  5. Time to install new packages. I install python3-pip and its build dependencies, plus sshpass. Finally, I clean the apt cache to reduce the size of the image.
  6. Using pip, I install the latest version of Ansible.
  7. I switch back to the jenkins user.
  8. Finally, I set the startup command back to jenkins-slave.
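As a side note, the same host_key_checking behaviour can also be controlled through an environment variable instead of shipping an ansible.cfg file. This is just an alternative sketch, not what I used in the image above:

# alternative to copying ansible.cfg: disable SSH host key checking
# via Ansible's environment variable (in the Dockerfile this could be an ENV instruction)
export ANSIBLE_HOST_KEY_CHECKING=False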

As always, the code is available in my official GitHub repository: https://github.com/mycloudfun/kubernetes-jenkins. The Docker image is available as mycloudfun/jnlp-slave:1.0 on Docker Hub. Once the image is built and uploaded to the hub.docker.com repository, we can start configuring our Jenkins to use it.
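If you are building your own version, publishing it boils down to something like this (the repository name and tag are mine; adjust them to your own Docker Hub account):

docker build -t mycloudfun/jnlp-slave:1.0 .
docker login
docker push mycloudfun/jnlp-slave:1.0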

Go to the Jenkins dashboard. The first thing to do is install an additional plugin. Go to Manage Jenkins → Manage Plugins → Available, search for the Ansible plugin and install it. Next, go back to Manage Jenkins → Global Tool Configuration and click Add Ansible. Let’s name it slaveAnsible and set the Path to /usr/local/bin (this is the directory where pip installed Ansible).

jnk2.png
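If you want to double-check where pip placed the Ansible binaries inside the image, a quick test could look like this (I override the jenkins-slave entrypoint only for this check; it should print a path under /usr/local/bin):

docker run --rm --entrypoint /bin/sh mycloudfun/jnlp-slave:1.0 -c "which ansible && ansible --version"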

Time to configure our Kubernetes cloud to use the new image. Go to Manage Jenkins → Configure System → in the Cloud/Kubernetes/Images/Container Template section, set the Docker image to mycloudfun/jnlp-slave:1.0 (this is my newly built image). Make sure that “Command to run” and “Arguments to pass to the command” are blank. For test purposes, let’s name our container template “jenkins-slave”.

jnk3.png

Let’s create a new job and see our Ansible task in action! Click New Item → Freestyle project → in the Build section, click Add build step and select Invoke Ansible Ad-Hoc Command.

jnk4.png

In the Ansible installation section, select the “slaveAnsible” installation we configured earlier. To check our configuration, we need a remote host with SSH access. Because I run my Kubernetes cluster in Minikube, installed on my Ubuntu Linux machine, I can use that system for the test. You can pick any system that is reachable from your Kubernetes cluster and runs SSH. In the Host pattern field, type the IP of your server. In the Inventory section, select Inline content and repeat the address of your server. In the Module field, type shell, and as the argument to execute, enter “uname -a”. Before we can proceed, we also need an active credential: click Add → Jenkins next to the Credentials field. Make sure you configure the credential in the Global credentials domain, with Kind set to Username with password and Scope set to Global.

jnk5.png

The ready-to-use job should look like this:

jnk6.png
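For reference, what this job will execute on the slave is roughly equivalent to the following ad-hoc command (192.168.99.1 is just a placeholder for your server’s IP; in the job, the SSH username and password come from the Jenkins credential, passed via sshpass, which is why we added it to the image):

# the trailing comma makes Ansible treat the address as an inline inventory
ansible 192.168.99.1 -i 192.168.99.1, -m shell -a "uname -a"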

Save the job and run it. After a couple of minutes, once Kubernetes finishes pulling the new image, our job fails. What went wrong? If we check the console output of the job, we see a lot of Java exceptions.

jnk7.png

It’s really funny, because I had configured everything correctly (at least I thought so). The exception output doesn’t tell us much about the real problem. Sometimes I received a different error, telling me that the Ansible binary, as well as sshpass, doesn’t exist. Why? I added them to my image! What is wrong? If you dig deeper into the logs, you can see that the pod which Jenkins starts to build our job consists of two containers.

jnk8.png
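You can see the same thing directly with kubectl; something like this lists the containers and their images in the slave pod (the pod name is a placeholder, take it from kubectl get pods):

kubectl get pods
kubectl get pod <slave-pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{" -> "}{.image}{"\n"}{end}'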

If you look closer, you will see that one of the containers is indeed the one I created earlier, but the other one is “jenkins/jnlp-slave:alpine”.

jnk9.png

This one has the environment variables configured with access to my master instance! That led me to the official documentation of the Jenkins Kubernetes plugin: https://github.com/jenkinsci/kubernetes-plugin/blob/master/README.md

There, in the constraints section, we can read the following:

WARNING If you want to provide your own Docker image for the JNLP slave, you must name the container jnlp so it overrides the default one. Failing to do so will result in two slaves trying to concurrently connect to the master.

That is the root cause! We need to name our container template “jnlp”, not “jenkins-slave” as we did originally. Let’s go back to the configuration and set the correct name:

jnk10.png

Finally, let’s run the job again. You can see that this time the job finishes successfully, and the Console Output shows the correct result (the output of the “uname -a” command):

jnk11.png

Also, kubectl shows that the pod now contains only one container, running our image:

jnk12.png

That’s why the container template name matters, as stated in the article title!

I hope you got through my configuration steps without problems, and that they save you a lot of time (unlike me) when customizing your own build environment.

As always, thank you for your time and for reading. Any feedback is highly appreciated.

See you in the next post!