
Kraken Kubernetes Installation

A Kubernetes installation is done in three steps: install Helm, set up a Kubernetes cluster, and install Kraken's Helm chart.

Helm

Helm is a package manager for Kubernetes. Packages are called Charts. They help you define, install, and upgrade many Kubernetes applications.

Helm Installation

Having Helm installed locally eases the installation of Kraken.

Installing it is simple using the official installation script:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
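
Once the script completes, you can check that the Helm client is available (the exact version printed depends on your installation):

helm version --short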

OctoPerf's Helm Repository

Kraken's Helm Chart is available on OctoPerf's Helm repository.

Add it to Helm with the command:

> helm repo add octoperf https://helm.octoperf.com
"octoperf" has been added to your repositories

Refresh Helm's repositories:

> helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "octoperf" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 

You may list available charts in OctoPerf's repository:

> helm search repo octoperf
NAME                        CHART VERSION   APP VERSION DESCRIPTION                                       
octoperf/enterprise-edition 11.4.1          11.4.1      Official OctoPerf Helm Chart for Enterprise-Edi...
octoperf/kraken             1.0.1           2.0.0-rc1   Official OctoPerf Helm Chart for Kraken 

Helm is now installed and configured; you can proceed to Kraken's Helm chart installation.

Kubernetes Cluster

Kubernetes is a platform for managing containerized workloads and services. The Kubernetes version of Kraken executes its tasks by creating Pods on the available nodes of a K8S cluster.

So, the first prerequisite is a functioning Kubernetes cluster with an Ingress controller installed. The cluster must be accessible from your local machine with the kubectl command (see the kubectl installation guide).
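
For reference, here is a minimal sketch of installing kubectl on a Linux machine, following the official Kubernetes documentation at the time of writing (the download URL resolves to the latest stable release; adjust it if you need a version matching your cluster):

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl cluster-info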

There are several ways to get your hands on a K8S cluster if you don't already have one available:

Kind

kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI. It is the easiest way to test Kraken K8S on a local machine.

Note

You may also consider Minikube to test Kraken, but be aware that it does not handle multiple Nodes.
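
For reference, a single-node Minikube cluster can be started as follows (a sketch, assuming Minikube and a supported driver are already installed):

minikube start
kubectl get nodes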

First, you need to install Docker:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
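
These packages come from Docker's own APT repository. If they are not found on your system, you may first need to add that repository (a sketch for Ubuntu, based on Docker's installation documentation):

sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update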

Then install Kind:

curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Create a Kind configuration file named kind-config.yaml that prepares the cluster for an Ingress controller:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
            authorization-mode: "AlwaysAllow"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
  - role: worker
  - role: worker
  - role: worker

Start a local cluster with the command:

kind create cluster --config kind-config.yaml

The cluster is created with one control plane and three worker nodes:

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.16.3) 🖼
 ✓ Preparing nodes 📦 
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

Activate the Ingress controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl patch deployments -n ingress-nginx nginx-ingress-controller -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx-ingress-controller","ports":[{"containerPort":80,"hostPort":80},{"containerPort":443,"hostPort":443}]}],"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.2/deploy/static/provider/baremetal/service-nodeport.yaml
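
You can check that the Ingress controller Pod is up and running before continuing (the Pod name will differ on your cluster):

kubectl get pods -n ingress-nginx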

Now that your local cluster is ready, you can install Helm.

Note

If you want to use local Docker images of Kraken, you can copy them into your Kind K8S cluster with the command: kind load docker-image octoperf/kraken-image:tag

Google Kubernetes Engine

Google’s GKE hosted Kubernetes platform is known to work with Helm.

To deploy on GKE you first need to create a cluster (see Create GKE cluster), and to have the kubectl command line tool and helm installed on your local computer.

Then install Google Cloud SDK on your local computer:

> sudo snap install google-cloud-sdk --classic
google-cloud-sdk 277.0.0 from Cloud SDK (google-cloud-sdk✓) installed

Login to GKE with the following command (a web page is opened to let you authenticate on the Cloud platform):

gcloud auth login
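
If you prefer to create the cluster from the command line rather than from the Cloud console, here is a sketch (the cluster name, zone, and node count are examples, matching those used below):

gcloud container clusters create standard-cluster-1 --zone us-central1-a --num-nodes 3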

In the Google Cloud Platform console, select your Kubernetes cluster and click on the Connect button to open the following dialog:

GKE Connect

Copy/paste the given command in your local shell console:

gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project kraken
Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster-1.

Verify that you are connected to your remote K8S cluster by listing its nodes:

> kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-695f33d2-52xc   Ready    <none>   28m   v1.13.11-gke.23
gke-standard-cluster-1-default-pool-695f33d2-jcrk   Ready    <none>   28m   v1.13.11-gke.23
gke-standard-cluster-1-default-pool-695f33d2-t6vj   Ready    <none>   28m   v1.13.11-gke.23

Now that your GKE cluster is ready and accessible locally, you can install an NGINX Ingress using Helm:

> kubectl create namespace nginx-ingress
namespace/nginx-ingress created
> helm install --namespace nginx-ingress nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true
NAME: nginx-ingress
LAST DEPLOYED: Wed Jan 29 13:53:31 2020
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
[...]

Verify that the Ingress controller is created with the following command:

> kubectl get service nginx-ingress-controller -n nginx-ingress
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   10.0.13.243   34.77.19.240   80:32139/TCP,443:32384/TCP   98s

Proceed to the Kraken helm chart installation and your Kraken application will be available at the IP address given in the EXTERNAL-IP column.

For example, open the administration UI at http://34.77.19.240/administration.

Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane.

The first step is to install the AWS command line client. On Ubuntu, use snap:

> sudo snap install aws-cli --classic
aws-cli 1.16.266 from Amazon Web Services (aws✓) installed

You can check the installation by displaying the aws client version:

> aws --version
aws-cli/1.16.266 Python/3.5.2 Linux/4.15.0-76-generic botocore/1.13.2

Then open your AWS management console and create a new IAM user dedicated to EKS:

  1. Open the IAM console at https://console.aws.amazon.com/iam/,
  2. In the navigation pane, choose Users, Create user,
  3. Select eksctl as the user name and Programmatic access for the AWS access type,
  4. Click on Next: Permissions,
  5. On the Set permissions page, select Attach existing policies directly and select AdministratorAccess in the policy list.
  6. Click on Next: Tags, Next: Review and Create user,
  7. Show and copy the Access key ID and Secret access key.
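
If you already have administrator credentials configured for the AWS CLI, the same user can be created from the command line instead (a sketch; the eksctl user name matches the one above):

aws iam create-user --user-name eksctl
aws iam attach-user-policy --user-name eksctl --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name eksctl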

Now configure your AWS client to use this role:

> aws configure
AWS Access Key ID [None]: Previously copied Access key ID 
AWS Secret Access Key [None]: Previously copied Secret access key
Default region name [None]: us-west-2
Default output format [None]: json

Then install and configure the Amazon EKS client: eksctl.

> curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
> sudo mv /tmp/eksctl /usr/local/bin
> eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.13.0"}

Using eksctl, create a cluster named kraken-test in the region us-west-2:

> eksctl create cluster \
> --name kraken-test \
> --version 1.14 \
> --region us-west-2 \
> --nodegroup-name standard-workers \
> --node-type t3.medium \
> --nodes 3 \
> --nodes-min 3 \
> --nodes-max 3 \
> --managed
[ℹ]  eksctl version 0.13.0
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2c us-west-2a us-west-2d]
[ℹ]  subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "kraken-test" in "us-west-2" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=kraken-test'
[ℹ]  CloudWatch logging will not be enabled for cluster "kraken-test" in "us-west-2"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --cluster=kraken-test'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kraken-test" in "us-west-2"
[ℹ]  2 sequential tasks: { create cluster control plane "kraken-test", create managed nodegroup "standard-workers" }
[ℹ]  building cluster stack "eksctl-kraken-test-cluster"
[ℹ]  deploying stack "eksctl-kraken-test-cluster"
[ℹ]  building managed nodegroup stack "eksctl-kraken-test-nodegroup-standard-workers"
[ℹ]  deploying stack "eksctl-kraken-test-nodegroup-standard-workers"
[✔]  all EKS cluster resources for "kraken-test" have been created
[✔]  saved kubeconfig as "/home/ubuntu/.kube/config"
[ℹ]  nodegroup "standard-workers" has 3 node(s)
[ℹ]  node "ip-192-168-57-19.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-75-34.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-8-214.us-west-2.compute.internal" is ready
[ℹ]  waiting for at least 3 node(s) to become ready in "standard-workers"
[ℹ]  nodegroup "standard-workers" has 3 node(s)
[ℹ]  node "ip-192-168-57-19.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-75-34.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-8-214.us-west-2.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/ubuntu/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "kraken-test" in "us-west-2" region is ready

Warning

The process is quite long. It takes about 15 minutes to start the cluster.

Check that kubectl is configured to communicate with the created cluster and display available nodes:

> kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-192-168-57-19.us-west-2.compute.internal   Ready    <none>   5m43s   v1.14.7-eks-1861c5
ip-192-168-75-34.us-west-2.compute.internal   Ready    <none>   5m45s   v1.14.7-eks-1861c5
ip-192-168-8-214.us-west-2.compute.internal   Ready    <none>   5m29s   v1.14.7-eks-1861c5

Now that your EKS cluster is ready and accessible locally, you can install an NGINX Ingress using Helm:

> kubectl create namespace nginx-ingress
namespace/nginx-ingress created
> helm install --namespace nginx-ingress nginx-ingress stable/nginx-ingress --set rbac.create=true --set controller.publishService.enabled=true
NAME: nginx-ingress
LAST DEPLOYED: Fri Jan 31 11:46:33 2020
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
[...]

Verify that the Ingress controller is created with the following command:

> kubectl get service nginx-ingress-controller -n nginx-ingress
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   10.100.201.31   af4f1d0d0441611eabb6802297c6b675-585786825.us-west-2.elb.amazonaws.com   80:32717/TCP,443:31415/TCP   99s

Proceed to the Kraken helm chart installation and your Kraken application will be available at the hostname given in the EXTERNAL-IP column.

For example, open the administration UI at http://af4f1d0d0441611eabb6802297c6b675-585786825.us-west-2.elb.amazonaws.com/administration.

Note

You can delete your cluster with the command eksctl delete cluster kraken-test --region us-west-2.

Kraken Helm Chart Installation

Before installing Kraken, you need to create the namespace octoperf on your Kubernetes cluster:

> kubectl create namespace octoperf
namespace/octoperf created

Finally, install Kraken with the following command:

> helm install --namespace octoperf kraken octoperf/kraken --version 1.0.2
NAME: kraken
LAST DEPLOYED: Mon Jan 27 17:21:02 2020
NAMESPACE: octoperf
STATUS: deployed
REVISION: 1
NOTES:
...

Note

Check for the latest version available on the Readme of Kraken's Helm chart.
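
You can also list all chart versions published in the repository from your local machine:

helm search repo octoperf/kraken --versions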

Custom Configuration

You can customize your installation by using a custom values.yaml file, passed with the parameter -f my-config.yaml.

For example:

> helm install --namespace octoperf kraken octoperf/kraken --version 1.0.2 -f config.yaml
NAME: kraken
LAST DEPLOYED: Mon Jan 27 17:21:02 2020
NAMESPACE: octoperf
STATUS: deployed
REVISION: 1
NOTES:
...

The content of the custom configuration file depends on what you want to edit. Please check the configuration section of the Readme for more information.
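
To see which values can be overridden, you can also dump the chart's default values to a file and edit it (helm show values is a standard Helm 3 command; the chart version is the one used above):

helm show values octoperf/kraken --version 1.0.2 > config.yaml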

Note

The grafana.persistence.existingClaim entry is mandatory in a custom .yaml when upgrading Kraken in order to tell the Grafana Helm dependency to re-use the existing PVC: helm upgrade -f config.yaml --namespace octoperf kraken octoperf/kraken --version 1.0.2 --force

grafana:
  persistence:
    existingClaim: kraken-grafana

The following section gives a configuration sample for setting up a license.

License Setup

Contact sales@octoperf.com to get a multi-hosts license for Kraken Kubernetes.

Create a config.yaml file with the following content to set up a license file:

backend:
  licenseFile: |
    # kraken community 2.0.0 community License (id: 1574693532324)
    8445f5f44c7f964b642a2a0368575854024639480c355888492744416ed0
    [Full content of your license.l4j file]
    9d715369689d53150569

Warning

For an existing installation, you need to remove the existing license configuration from your cluster: kubectl delete configmap -n octoperf kraken-backend-license

Upgrade your Kraken Helm chart with this configuration:

helm upgrade -f config.yaml --namespace octoperf kraken octoperf/kraken --version 1.0.2 --force

List all Kraken Pods:

> kubectl get pods -n octoperf
NAME                                        READY   STATUS            RESTARTS   AGE
kraken-administration-ui-677646ff64-5clw9   1/1     Running           0          69s
kraken-analysis-84db7d9685-qrr8q            1/1     Running           0          69s
kraken-documentation-857745cf65-v5np9       1/1     Running           0          69s
kraken-gatling-ui-77bbbffb96-xq5zw          1/1     Running           0          69s
kraken-grafana-7bd6fc9b7f-9nlcx             1/1     Running           0          69s
kraken-influxdb-0                           1/1     Running           0          69s
kraken-runtime-94c44fc48-ccr8x              1/1     Running           0          69s
kraken-static-65dd85c6b5-gh8rc              1/1     Running           0          69s
kraken-storage-7687bb8dcd-vxssq             1/1     Running           0          69s

Display logs for the Runtime backend container:

> kubectl logs kraken-runtime-94c44fc48-ccr8x -n octoperf
HOME=/home/kraken
JAVA_OPTS=-Xmx256m
[...]
2020-01-23 18:11:42.423  INFO 8 --- [           main] com.kraken.u                             : Your license allows you to run tasks on 10 host(s)
[...]
2020-01-23 18:11:43.574  INFO 8 --- [           main] com.kraken.Application                   : Started Application in 5.137 seconds (JVM running for 5.675)

The license capacity is displayed in the server startup logs.

Uninstall

The following procedure explains how to uninstall Kraken from an existing Kubernetes cluster:

  1. Delete the Kraken Helm chart helm delete kraken --namespace octoperf,
  2. Delete the OctoPerf namespace kubectl delete namespace octoperf.
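
You can check that nothing remains afterwards (the release should no longer be listed and the octoperf namespace should be gone):

helm list --namespace octoperf
kubectl get namespaces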