Pod and Service Tutorial

In this tutorial we will run a simple stateless web application using Milpa. This will introduce the basics of milpa and milpactl usage. We assume you already have a working Milpa installation on a laptop or server.

Preliminaries

Install milpa and milpactl.

The tutorial files are installed with Milpa and are located at /opt/milpa/docs/tutorial-files. The files for creating the docker images used in this tutorial are also included with the tutorial files.

Running a Pod with Milpa

Make sure milpa is up and running, and you have a working server configuration file (located at /opt/milpa/etc/server.yml by default). The copy of server.yml installed by Milpa will need to be updated to specify a unique clusterName, AWS credentials and a valid license.

You will use the command line interface milpactl to interact with and control Milpa. Open a new terminal window on the same machine Milpa is running on and check to see that no Pods are running.

$ milpactl get pods
NAME      UNITS     RUNNING   STATUS    RESTARTS   NODE      IP        AGE

Also check to see that no Services are running. Typing "services" gets a bit tedious, so milpactl also understands abbreviations for certain resource types.

$ milpactl get svc
NAME      PORT(S)   SOURCES     AGE

If there are zero Pods and zero Services running, there should be zero compute instances in the system. Let's verify this by listing the Nodes in the system.

$ milpactl get nodes
NAME      STATUS    INSTANCE-TYPE   INSTANCE   IP        AGE

We'll be creating a Pod that runs a simple "Hello World!" service. Take a look at the file helloworld-pod.yml in a text editor. You'll notice that the Pod is named helloworld, requests 0.4GiB of memory and runs a docker image from elotl called helloserver.
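The manifest shipped with the tutorial is authoritative, but it should look roughly like the sketch below (modeled on the Pod manifests shown later in these tutorials; the layout of the resources request is an assumption).

---
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  resources:
    memory: 0.4Gi         # field layout is an assumption; see the shipped file
  units:
    - name: helloserver
      image: elotl/helloserver:latest
      command:
      - /helloserver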

Now let's create our Pod.

$ milpactl create -f helloworld-pod.yml
helloworld

Once the Pod is created, Milpa will boot a compute node for the Pod to run on. The compute node type is determined by rounding up the resources requested in the Pod manifest to the most cost-effective instance type offered by the cloud provider. Since we requested 0.4GiB memory, Milpa picks t3.nano (which has 0.5GiB memory) as the instance type for the Pod. Once the node boots, Milpa dispatches the Pod to the Node and the Units specified in the Pod spec run on the Node.

To see the Pod status:

$ milpactl get pod helloworld
NAME         UNITS     RUNNING   STATUS        RESTARTS   ...
helloworld   1         1         Pod Running   0          ...

We can also see the entire Pod API object by specifying json or yaml output format. To see the full Pod status:

$ milpactl get -ojson pod helloworld
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "helloworld",
        "labels": {
            "app": "helloworld"
        },
        "creationTimestamp": "2018-08-07T20:32:05.401180724-07:00",
...

milpactl get nodes should now display the right-sized t3.nano instance provisioned just-in-time for the Pod:

$ milpactl get nodes
NAME                                   STATUS    INSTANCE-TYPE   INSTANCE              IP              AGE
37621e3e-38b6-4dab-97e3-d719af861b9b   Claimed   t3.nano         i-0748af94da188ed10   54.236.63.193   3m

Once our Pod is running, we would like to interact with it. The helloworld Pod runs a simple webserver that replies to any request with "Hello Milpa". Let's see if we can reach it. The Pod's public network addresses are stored in a list in the Pod object at status.addresses; find the Pod's public IP in that list. If we try to query the helloworld Pod on port 8002, we'll see that our connection attempts fail.
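One quick way to pull the public address out of the Pod object is to filter its yaml output (the same grep trick is used again in the Advanced Deployment Tutorial):

$ milpactl get pod -oyaml helloworld | grep -B1 PublicIP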

$ curl --max-time 2 <Pod Public IP Address>:8002

We have a running Pod but our cloud firewall (a Security Group in EC2) isn't letting any traffic reach the Pod through its public IP address. To allow public traffic to our Node, we must create a Service. Use the helloworld-svc.yml manifest to create a public Service that opens up TCP port 8002 to the world.

$ milpactl create -f helloworld-svc.yml

Milpa applies Services to Pods using a system of labels and selectors. This is the same system used by Kubernetes, and the extensive Kubernetes documentation on labels and selectors translates directly to Milpa. Briefly, each Pod carries a set of user-defined key-value pairs called labels. The selector in the Service, app: helloworld, matches any Pod in Milpa carrying the label app: helloworld.
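For reference, a minimal helloworld-svc.yml might look something like the sketch below (modeled on the Service manifest shown in the Multi-Tier tutorial; check the shipped file for the exact contents, the metadata labels here are an assumption):

---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc
  labels:
    svc: helloworld
spec:
  selector:
    matchLabels:
      app: helloworld
  ports:
    - name: http
      protocol: TCP
      port: 8002
  sourceRanges:
    - 0.0.0.0/0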

Once the Service is created and the changes are propagated to the cloud (shouldn't take more than a second or two), we can curl our Service and see the output.

$ curl --max-time 2 <Pod Public IP Address>:8002
Hello Milpa from ip-172-31-51-36.ec2.internal - 172.31.51.36
Env Vars:
    MILPA_VAR=milpa-env-var-value

As a last step, we'll see what happened on the Node by accessing the logs of the Unit in the helloworld Pod.

$ milpactl logs helloworld -u helloserver

To clean up, delete the Pod and Service.

$ milpactl delete pod helloworld
$ milpactl delete svc helloworld-svc

After deletion, verify that the compute Node list is empty:

$ milpactl get nodes
NAME      STATUS    INSTANCE-TYPE   INSTANCE   IP        AGE

Deployment Tutorial

This tutorial guides the user through the process of creating an nginx Deployment and then investigates some features of Milpa Deployments.

A Milpa Deployment is used to create and manage a set of multiple, identical Pods. If any of those Pods are destroyed or become unresponsive, Milpa will create new Pods to replace them. Inside the Deployment resource is a PodTemplate that fully describes the Pods the Deployment should create. When the user updates the Deployment's PodTemplate, Milpa takes care of rolling out a new set of Pods and destroying the existing ones. Deployments are frequently used to roll out software updates to a production environment.

Preliminaries

The tutorial files are installed with Milpa and are located at /opt/milpa/docs/tutorial-files.

This tutorial builds upon the previous tutorial and requires the user to be able to run milpa and issue commands with milpactl. Refer to the Pod and Service Tutorial or the main Milpa documentation for information on how to run those applications.

Running a Deployment

We'll start by looking at a Deployment resource. Open the file nginx-deploy.yml in a text editor or viewer. In the resource file there is a template section that describes a simple PodTemplate that will run an nginx image. Above the template section we see the most important part of the Deployment spec: the number of replicas of the nginx Pod this Deployment will create. Right now that value is set to 1, but we'll be updating it later in the tutorial.
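The shipped nginx-deploy.yml is the source of truth, but structurally it should resemble the following sketch (modeled on the backend Deployment manifest in the Advanced Deployment Tutorial; the image tag and label value are assumptions):

---
apiVersion: v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1              # we'll scale this up later in the tutorial
  template:
    metadata:
      labels:
        app: nginx         # label value is an assumption
    spec:
      units:
        - name: nginx
          image: nginx:latest   # image tag is an assumption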

Let's create the Deployment in Milpa with milpactl create.

$ milpactl create -f nginx-deploy.yml

Behind the scenes, a Deployment creates a ReplicaSet for the current version of the Deployment. A ReplicaSet is a resource that takes care of creating and running multiple identical Pods. Let's take a look at the Deployment, ReplicaSet and Pod objects now.

$ milpactl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1         1         1            1           1m
# rs is an alias for ReplicaSet
$ milpactl get rs
NAME                             DESIRED   CURRENT   AGE
nginx-deployment-1533682416739   1         1         1m
# po is an alias for pod.
$ milpactl get po
NAME                                   UNITS     RUNNING   STATUS        RESTARTS
nginx-deployment-1533682416739-zwrt4   1         1         Pod Running   0 ...

Scaling a Deployment

So far, a Deployment doesn't appear to get us much more functionality than a single running Pod. Let's make it more interesting and scale the number of replicas up from 1 to 3 to start to see the advantages of Deployments. There are two ways to update the number of replicas in a Deployment. One is to edit nginx-deploy.yml, change the value of the spec.replicas field and run milpactl update -f nginx-deploy.yml (a sketch of that edit follows the scale command below). An easier way to change the number of replicas in a Deployment resource is to use milpactl's scale command:

$ milpactl scale --replicas=3 -f nginx-deploy.yml
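For completeness, the first approach amounts to editing the replicas value in the manifest and pushing the change with milpactl update; the relevant fragment of nginx-deploy.yml would change like this:

spec:
  replicas: 3   # was 1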

We can watch the rollout of the new nginx Pods.

# use Ctrl-c to exit
$ watch milpactl get pods
Every 2.0s: milpactl get pods

NAME                                   UNITS     RUNNING   STATUS           RESTARTS
nginx-deployment-1533682416739-ddt0x   1         1         Pod Running   0
nginx-deployment-1533682416739-sr8xd   1         1         Pod Running   0
nginx-deployment-1533682416739-zwrt4   1         1         Pod Running   0

Deployments have a couple of advantages over deploying bare Pods.

As mentioned before, Deployments can be used to roll out updates. Any change to the Pod template spec will cause a new ReplicaSet to be created, and the Deployment will adjust the number of replicas in the two running ReplicaSets to slowly roll out the updated Pods: the old ReplicaSet is scaled down and the new one scaled up until only the new ReplicaSet is running. Rollouts are shown in the Advanced Deployment Tutorial.

By using ReplicaSets, Deployments also ensure that the number of Pods running matches the desired number of replicas. If we delete one of our Pods, it will be recreated. Let's test this out and terminate one of the nginx Pods.

# note: your pods will have different names
$ milpactl delete po nginx-deployment-1533682416739-zwrt4

$ watch milpactl get po
Every 2.0s: milpactl get pods

NAME                                   UNITS     RUNNING   STATUS           RESTARTS
nginx-deployment-1533682416739-5hbgv   1         0         Pod Waiting      0  ...
nginx-deployment-1533682416739-ddt0x   1         1         Pod Running      0  ...
nginx-deployment-1533682416739-sr8xd   1         1         Pod Running      0  ...
nginx-deployment-1533682416739-zwrt4   1         1         Pod Terminated   0  ...

Under the hood, the ReplicaSet created by the Deployment noticed that it no longer satisfied the desired number of replicas and it created another Pod.

Cleaning up Resources

Let's delete everything we've created.

# specifying a file for delete will delete all the resources in the file
$ milpactl delete -f nginx-deploy.yml

Deleting a Deployment will delete the Deployment resource as well as the ReplicaSet and Pod resources the Deployment created. To prevent this behavior and keep the ReplicaSets and Pods, add --cascade=false to the delete command.

That's it for our first look at Deployments. In the Advanced Deployment Tutorial we'll look at adding an HAProxy load balancer in front of our replicated service and rolling out updates to our Deployment.

Multi-Tier Application Deployment

In this tutorial we will deploy a multi-tier chat application using Milpa. The application consists of a MongoDB Pod and Service for the storage tier, a Rocket.Chat Deployment for the application tier and a LoadBalancer Service that exposes Rocket.Chat to the world.

The Rocket.Chat server will use DNS to find the MongoDB backend so ensure that Milpa is configured to register services in a private DNS zone for service discovery (Milpa will automatically create the zone if this setting is enabled). DNS service discovery is enabled by default in a fresh install of Milpa. To check whether DNS service discovery is enabled, make sure the following section of server.yml is uncommented.

serviceDiscovery:
  privateDNS:
    ttl: 30

Create the MongoDB Pod and Service

Rocket.Chat uses a single MongoDB instance to store its state, and multiple Rocket.Chat processes can share a single MongoDB backend. MongoDB also supports a high availability setup with multiple replicas, but setting that up is outside the scope of this tutorial. The mongo.yml manifest file includes a Service resource that will register the MongoDB server in DNS and open port 27017 to the VPC so that the Rocket.Chat service can find and access the database.
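Structurally, mongo.yml contains a Pod with two Units plus a Service, along the lines of the sketch below (the images, commands and label values here are assumptions for illustration; the shipped manifest is authoritative):

---
apiVersion: v1
kind: Pod
metadata:
  name: mongo
  labels:
    app: mongo            # label value is an assumption
spec:
  restartPolicy: OnFailure
  units:
    - name: mongodb
      image: mongo:latest                                  # image is an assumption
      command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]   # assumption
    - name: init-replicaset
      image: mongo:latest                                  # assumption
      command: ["sh", "-c", "sleep 10 && mongo --eval 'rs.initiate()'"]   # assumption
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
  sourceRanges:
    - 172.31.0.0/16       # your VPC CIDR; this value is an assumption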

Go ahead and take a look at the contents of mongo.yml, then create the MongoDB Pod and Service with milpactl.

$ milpactl create -f mongo.yml

Ensure that the mongo Pod reaches a running state.

$ milpactl get pods
NAME      UNITS     RUNNING   STATUS                                      RESTARTS
mongo     2         1         Pod Running - Unit Terminated: ExitCode:0   1

In the output we see that the second Unit in the manifest, init-replicaset, has stopped running and completed successfully. This is expected since the Unit runs a single command to initialize the MongoDB replica set. It will not be restarted because the restartPolicy of the Pod is OnFailure.

Create the Rocket.Chat Deployment

The rocketchat application will run as a Deployment with three replicas. The manifest sets a couple of environment variables to configure Rocket.Chat and tell it where to find the MongoDB backend. The rocketchat Unit requires a little under 3GB of memory but is single threaded, so we'll include a resource request for 1 CPU and 3GB of memory. Milpa will choose the cheapest instance that matches those specs.
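Before creating it, here's a rough sketch of what rocketchat.yml contains (the image, environment variable values and resource field layout are assumptions; Rocket.Chat conventionally reads its MongoDB location from MONGO_URL, and the DNS name below assumes Milpa registers the mongo Service as mongo.default.<your cluster name>.local):

---
apiVersion: v1
kind: Deployment
metadata:
  name: rocketchat
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: rocketchat
    spec:
      resources:                       # field layout is an assumption
        cpu: "1"
        memory: 3Gi
      units:
        - name: rocketchat
          image: rocketchat/rocket.chat:latest   # image is an assumption
          env:
            - name: MONGO_URL          # Rocket.Chat's standard variable
              value: mongodb://mongo.default.<your cluster name>.local:27017/rocketchat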

$ milpactl create -f rocketchat.yml

The rocketchat image is fairly large, so it'll take a bit of time for the Unit's image to download and begin running.

Adding a Load Balancer Service

We'll create a Service of type: LoadBalancer to serve as an ingress point into the system and spread requests across the rocketchat Pods. In Milpa, a LoadBalancer Service will provision a cloud load balancer (an AWS Classic ELB) and automatically configure it to forward requests to the Service's matching Pods. The full manifest for the load balancer is shown below:

---
apiVersion: v1
kind: Service
metadata:
  name: rocketchat
  labels:
    svc: rocketchat
  annotations:
    service.elotl.co/aws-load-balancer-healthcheck-timeout: "2"
    service.elotl.co/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.elotl.co/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.elotl.co/aws-load-balancer-healthcheck-interval: "5"
    service.elotl.co/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    matchLabels:
      app: rocketchat
  ports:
    - name: http
      protocol: TCP
      port: 80
      nodePort: 3000
  sourceRanges:
    - 0.0.0.0/0

A number of annotations have been added to the load balancer to specify AWS-specific options. Most of these options have been added to ensure the load balancer comes up and attaches to the backend Pods quickly. Further annotations are available for additional configuration of the ELB; please refer to the Services section of the Milpa reference manual to see all supported annotations.

Now that we have a feeling for what LoadBalancer services look like, go ahead and create the load balancer service.

$ milpactl create -f rocketchat-svc.yml

Accessing the Application

Once the load balancer is created and connected to the backend Pods in AWS (it should take about 10-20 seconds to be created and pass healthchecks), we can set up and start using Rocket.Chat. Get the DNS name of the load balancer using milpactl:

$ milpactl get svc rocketchat
NAME         PORT(S)   SOURCES     INGRESS ADDRESS                                                          AGE
rocketchat   80/TCP    0.0.0.0/0   milpa-4fgbztsqwiwbiudujkkidtszsi-810864342.us-east-1.elb.amazonaws.com   8m

In a web browser, navigate to the load balancer's ingress address. You should see the Rocket.Chat startup page. If you're interested in using Rocket.Chat and kicking the tires, go ahead and fill in the initial details (username, email address, etc.), create a user and start chatting.

Cleaning Up

To clean up the application, delete the mongo Pod, the rocketchat Deployment and the Services. All cloud resources created in this tutorial will be deleted when the Milpa resources are deleted.

$ milpactl delete pod mongo
mongo
$ milpactl delete deploy rocketchat
rocketchat
$ milpactl delete svc mongo
mongo
$ milpactl delete svc rocketchat
rocketchat

Advanced Deployment Tutorial

In this tutorial, we will create a replicated backend service behind an HAProxy load balancer. The backend service uses the same image as the Pod and Service Tutorial: it simply prints out a "Hello Milpa!" message along with any environment variables that have "milpa" in their name. Milpa will use a Service to register the replicated Pods in an AWS private DNS zone. The HAProxy load balancer has been configured to dynamically update its backends using the SRV record for the backend service. This system allows dynamic scaling of backend services and minimizes downtime during deploys.

Tutorial Setup

Preliminaries

Make sure milpa is up and running, and you have a working server configuration file (located at /opt/milpa/etc/server.yml by default).

The tutorial files are installed with Milpa and are located at /opt/milpa/docs/tutorial-files. The files for creating the docker images used in this tutorial are also included with the tutorial files.

The HAProxy Pod is configured to read the addresses of backend servers from a private AWS DNS zone that Milpa has created. Milpa should be configured to register servers in a private DNS zone for service discovery (Milpa will automatically create the zone if this setting is enabled). DNS service discovery is enabled by default in a fresh install of Milpa. To check whether DNS service discovery is enabled, make sure the following section of server.yml is uncommented.

serviceDiscovery:
  privateDNS:
    ttl: 30

You'll need to use a text editor to update the HAProxy Pod's resource file (milpa-tutorial-files/haproxy.yml) with an environment variable for your MILPA_CLUSTER_NAME. This variable is used in HAProxy's configuration (shown later in this tutorial).

---
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
  labels:
    app: haproxy
spec:
  instanceType: t3.nano
  units:
    - name: haproxy
      image: elotl/haproxylb:latest
      command: ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
      env:
        - name: MILPA_CLUSTER_NAME
          value: <FILL_IN>

MILPA_CLUSTER_NAME is the name of the cluster specified in server.yml (typically it's the second parameter in the file).

Running the HAProxy Pod

Create the HAProxy load balancer Pod and a Service to expose that Pod to the world on port 80.

$ milpactl create -f haproxy.yml

The HAProxy server is configured to forward connections to a maximum of 2 backend servers pointed to by the SRV record _hello._tcp.backend-svc.default.${MILPA_CLUSTER_NAME}.local. HAProxy will interpret environment variables in its configuration so ${MILPA_CLUSTER_NAME} will be replaced with the value specified in the Pod manifest. Later in this tutorial, we'll use Milpa to create a Service for our backend servers that will be responsible for populating this record with the DNS names of the servers.

Here's the relevant portion of the HAProxy config (the full file can be found at /opt/milpa/docs/tutorial-files/haproxy/haproxy.cfg):

resolvers awsdns
    nameserver dns0 169.254.169.253:53

backend hello-backend
    mode    tcp
    balance roundrobin
    server-template myapp 2 "_hello._tcp.backend-svc.default.${MILPA_CLUSTER_NAME}.local" \
        check resolvers awsdns

Wait for the HAProxy Pod to be running and then get the IP address of the Pod using milpactl. The Pod's addresses are listed in the HAProxy Pod's status.

$ milpactl get pod -oyaml haproxy | grep -i -A1 -B1 address
status:
  addresses:
  - address: 172.31.44.62
    type: PrivateIP
  - address: ip-172-31-44-62.ec2.internal
    type: PrivateDNS
  - address: 54.236.63.193
    type: PublicIP
  - address: ec2-54-236-63-193.compute-1.amazonaws.com
    type: PublicDNS

In a separate terminal window, start curling the HAProxy IP address. Without any backend Pods, you should see empty replies from the server.

$ ./curl-servers.sh <IP address of HAProxy instance>
curl: (52) Empty reply from server
curl: (52) Empty reply from server
...
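curl-servers.sh is just a convenience wrapper; it is essentially a loop like the following (a sketch, assuming the script simply curls the given address repeatedly):

#!/bin/sh
# Repeatedly curl the HAProxy address passed as the first argument.
while true; do
    curl --max-time 2 "http://$1"
    sleep 1
done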

Create a Replicated Backend Deployment

Start the backend Deployment and Service (both resources are contained in the backend-deploy.yml file).

$ milpactl create -f backend-deploy.yml

Once the backend Pods are running, Milpa will register them in an SRV record in AWS Route53. HAProxy will pick up their addresses from the DNS SRV record, add those Pods to its backend and start passing HTTP requests to them once a couple of healthchecks pass. Since we are not using any standby Nodes, it might take a minute or two for the Pods to start running. You should eventually see curl reporting something similar to the following output.

Hello Milpa! This is ip-172-31-29-203.ec2.internal - 172.31.29.203
Milpa Vars:
    MY_MILPA_VAR=variable_value_1
Hello Milpa! This is ip-172-31-81-178.ec2.internal - 172.31.81.178
Milpa Vars:
    MY_MILPA_VAR=variable_value_1
Hello Milpa! This is ip-172-31-29-203.ec2.internal - 172.31.29.203
Milpa Vars:
    MY_MILPA_VAR=variable_value_1
...
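If you'd like to verify the DNS side directly, you can query the SRV record with dig from an instance inside the VPC (the private zone is only resolvable there; replace <cluster name> with your MILPA_CLUSTER_NAME):

$ dig +short @169.254.169.253 _hello._tcp.backend-svc.default.<cluster name>.local SRV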

Update the Deployment

Deployments allow a user to slowly roll out updates to an application with limited downtime. Any change to the Pod spec will trigger a rollout of new Pods. To show this, use a text editor to update the value of the MY_MILPA_VAR environment variable in the Pod spec (backend-deploy.yml) to variable_value_2. This is a simplistic example; a more common use case for Deployments would be to roll out a new version of the backend server by updating the tag of the deployed image.

---
apiVersion: v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  maxSurge: 1
  maxUnavailable: 0
  template:
    metadata:
      labels:
        app: backend
    spec:
      instanceType: t3.nano
      units:
        - name: helloserver
          image: elotl/helloserver:latest
          command:
          - /helloserver
          env:
            - name: MY_MILPA_VAR
              value: variable_value_2 # <- any change rolls out an update

Update the deployment using milpactl update.

$ milpactl update -f backend-deploy.yml

Like the original rollout, after a new Pod starts running, it's added to DNS and eventually added to HAProxy's backends. Since we have specified a maxSurge of 1 and a maxUnavailable of 0 in the deployment spec, we'll be running an old version of the backend server alongside an updated version until the rollout is complete.
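To follow the rollout as it happens, watch the Pods turn over and note that two ReplicaSets exist until the old one is scaled down:

# use Ctrl-c to exit
$ watch milpactl get po
$ milpactl get rs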

We know the rollout is complete when the responses from the two servers contain the updated environment variable value.

Cleaning up Resources

Once the rollout is complete, it's time to shut everything down and clean up all the resources Milpa has created.

$ milpactl delete -f haproxy.yml
$ milpactl delete -f backend-deploy.yml

Deleting a Deployment will delete the Deployment resource as well as the ReplicaSet and Pod resources the deployment created. To prevent this behavior and keep the ReplicaSets and Pods, add --cascade=false to the delete command.

Congratulations! In this tutorial we've seen how to use Milpa to run Pods, run Deployments and use Services for service discovery. We've also seen how to update a Deployment and delete Milpa resources.

Jenkins Tutorial

Overview

In this tutorial we will create a Pod that runs a Jenkins build server using Milpa. The Jenkins server will be configured to run unit tests for a simple webserver hosted on GitHub. Unlike a standard Jenkins setup, the unit tests themselves will not run on the Jenkins master or a Jenkins agent/slave; instead, Jenkins will spawn a Milpa Pod (on a new cloud instance) to run those tests. When the tests are complete, the instance will be shut down, and Jenkins will record the success or failure of the tests along with a full log from the test run.

While this is a simple example, it shows that Jenkins and Milpa together can be used to create a highly scalable build system, with possibly hundreds of builds running in parallel and automatically scaling up and down depending on developer demand.

Prerequisites

The Jenkins Pod we create will need to be able to access the Milpa controller's API port. As such the following prerequisites are necessary for the tutorial to work.

  1. The Milpa Controller must be running on an instance running on the cloud provider and inside the same VPC/Virtual Network that Milpa Pods will be launched into. To see what VPC the controller is configured to run in, look at cloud.aws.vpcID or cloud.azure.virtualNetworkName in Milpa's server.yml configuration file.
  2. Milpa's API port (TCP port 54555) on the instance running Milpa must be reachable from within the VPC. Verify that TCP port 54555 has been opened to the VPC in the Milpa Controller instance's security group (a quick connectivity check is sketched after this list).
  3. The VPC should be configured so that Milpa Pods can reach an S3 endpoint on the public internet and download from that endpoint. This is the default configuration for Milpa.
  4. Milpa's DNS service discovery must be enabled (it is enabled by default). If you have made modifications to server.yml, ensure that the serviceDiscovery section of server.yml is uncommented as shown below.
serviceDiscovery:
  privateDNS:
    ttl: 30
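A quick way to check prerequisite 2 is to test the API port from an instance inside the VPC (a sketch using netcat; replace the placeholder with the Milpa Controller's private IP address):

$ nc -zv <milpa controller private IP> 54555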

Run the Jenkins Pod and Service

If the Milpa Controller isn't running go ahead and start it.

$ sudo service milpa start

Take a look at the jenkins.yml manifest in the docs/tutorial-files folder. This manifest will create a Jenkins Pod and a load balancer for it. For demonstration purposes, we're running a bit fast and loose and have opened the Jenkins HTTP port to the world. If you want to limit the source of allowed traffic to Jenkins, feel free to edit the sourceRanges CIDRs in the Service. If exposing Jenkins via HTTP is too risky for your account, see the end of this tutorial for information on how to configure HTTPS for Jenkins.

Start the Jenkins Pod:

$ milpactl create -f jenkins.yml
jenkins-tutorial-lb
jenkins-tutorial

The jenkins-tutorial Pod will take about a minute to launch and configure itself. Once it is up and running, it will take another minute for the load balancer's required number of healthchecks to pass. While waiting for everything to start, take a look at the Pod spec in jenkins.yml. You'll see there's an initUnit that is responsible for pulling a tar file from S3 and untarring it to a data volume shared with the jenkins Unit. The tar file contains the necessary configuration for a fully functioning Jenkins server that we've created for this tutorial. Once the initUnit is finished running, the jenkins Unit will start up fully configured.

Run a Build on Jenkins and Milpa

To verify the Jenkins server is running, query the Pod and wait until the status is "Pod Running":

$ milpactl get pod jenkins-tutorial
NAME               UNITS     RUNNING   STATUS ...
jenkins-tutorial   1         1         Pod Running  ...

Once the jenkins-tutorial Pod is up and running, query milpactl for the load balancer's address:

$ milpactl get svc jenkins-tutorial-lb
NAME                  PORT(S)   SOURCES     INGRESS ADDRESS
jenkins-tutorial-lb   80/TCP    0.0.0.0/0   milpa-iyl45fw4plqwg52ge7vvppxgya-299586381.us-east-1.elb.amazonaws.com

For simplicity, we've configured Jenkins to listen on HTTP and the load balancer to forward port 80 to the jenkins-tutorial Pod. Copy the Service's ingress address into a web browser's location bar to navigate to Jenkins. You should see the Jenkins login screen.

Log in to Jenkins with the following credentials:

Username: milpauser
Password: Milpa84QVeht

When you log in, you should see that Jenkins is configured with a single project, helloserver. Helloserver is a webserver, written in Go, that responds to all incoming requests with a reply like:

Hello
You have reached ip-172-31-32-54.ec2.internal - 172.31.32.54
Env Vars
<list of server environment variables>

You can find the source code for helloserver at https://github.com/elotl/helloserver. Helloserver's test suite is a single test that checks that certain words are present in the output of the http handler.

To run the test suite, click on "helloserver" on the Jenkins home screen.

In the helloserver project, click "Build Now" to start a build. After a few seconds, you should see the build running.

In the new build, follow along with the logs (Console Output) to see what's happening. It'll take a moment for a new Milpa Pod to start running, but once it starts up, the test suite will run and the build will quickly pass.

How the Build Works

On the surface, the build process doesn't look much different from a regular Jenkins build. However, the helloserver project has a custom executor that starts a new Milpa Pod, packages up the build environment, ships it to the new Milpa Pod and runs the tests there. The diagram and steps below show what's involved; a rough sketch of the commands behind those steps follows the list.

Jenkins Build via Milpa

  1. At the start of the build, Jenkins creates a new Pod using milpactl on the Jenkins server.
  2. The build environment is shipped to the new Pod.
  3. The tests are run on the new Pod and the test output and result are streamed back to Jenkins (via Milpa).
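A minimal sketch of what the custom executor does with milpactl during a build (the manifest name and Unit name are illustrative assumptions; the actual executor shipped in the Jenkins configuration may differ):

# 1. Create a build Pod from a manifest template (name is illustrative).
$ milpactl create -f build-pod.yml
# 2. Once the build environment has been shipped to the Pod and the tests
#    are running, stream the Unit's output back through Milpa.
$ milpactl logs build-pod -u build
# 3. Inspect the Unit's exit status, then tear the Pod down.
$ milpactl get pod build-pod
$ milpactl delete pod build-pod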

Build Environment Variables

The following environment variables can be used to customize the build:

Further exercises

Cleanup

Delete the Jenkins Service and Pod.

$ milpactl delete -f jenkins.yml
jenkins-tutorial-lb
jenkins-tutorial