    TANGO-Grafana

    An investigation into the modifiability of Prometheus and Grafana in a TANGO-controls context. The installation contains four data sources, already configured:

    1. Tango database (MySQL),
    2. Tango archiver database,
    3. Prometheus and the tango-exporter,
    4. Elasticsearch.

    There are four example dashboards available:

    1. Tango-dashboard: uses mainly Prometheus and tango-rest to get general information about the TANGO control system,
    2. Tangodb: uses the tangodb as a data source to show the various tables of the system,
    3. Archiver: gets data from the archiver,
    4. State analysis: includes commands to call and a view of how the states change over time.

    Together with the default Grafana panel plugins, four other panel plugins are available:

    1. ajax plugin: to make HTTP GET calls to tango-rest, for instance (see the example request after this list),
    2. plotly: to create personalized 2D/3D panels,
    3. tango-attribute: to view the tango attributes of a device in a table,
    4. tango-command: to call a command via tango-gql.
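
    As an indicative example of what the ajax panel can do, the request below reads an attribute value through the TANGO REST API. The service name, port, credentials and exact path are assumptions - they depend on how tango-rest is deployed and which REST API version it serves:

    # Hypothetical request - adjust host, port, credentials and path to your tango-rest deployment.
    curl -u <user>:<password> \
      "http://tango-rest.integration.svc.cluster.local:8080/tango/rest/rc4/hosts/databaseds-tango-base-test/10000/devices/sys/tg_test/1/attributes/ampli/value"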

    The pipeline in the repository is able to:

    • build and push the Docker images to Nexus,
    • publish the Helm chart on Nexus,
    • deploy directly in syscore (namespace tango-grafana); a rough sketch of these stages is shown below.
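
    The commands below are only a sketch of what such pipeline stages typically run; the registry address, image name, chart path and release name are assumptions, not the repository's actual CI configuration:

    # Sketch only - registry, image name, chart path and namespace are assumptions.
    docker build -t <nexus-registry>/ska-tango-grafana-exporter:latest .
    docker push <nexus-registry>/ska-tango-grafana-exporter:latest
    helm package chart/                                                      # package the chart for publication on Nexus
    helm upgrade --install tango-grafana chart/ --namespace tango-grafana    # deploy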

    The following is a set of instructions for running TANGO-Grafana on Kubernetes.

    Minikube

    Using Minikube enables us to create a single-node standalone Kubernetes cluster for testing purposes. If you already have a cluster at your disposal, you can skip forward to 'Install the Helm chart'.

    The generic installation instructions are available at https://kubernetes.io/docs/tasks/tools/install-minikube/.

    Minikube requires the Kubernetes runtime, and a host virtualisation layer such as kvm, virtualbox etc. Please refer to the drivers list at https://github.com/kubernetes/minikube/blob/master/docs/drivers.md .

    On Ubuntu 18.04 for desktop-based development, the most straightforward installation pattern is to go with the none driver as the host virtualisation layer. CAUTION: this will install Kubernetes directly on your host and will destroy any existing Kubernetes related configuration you already have (eg: /etc/kubernetes, /var/lib/kubelet, /etc/cni, ...). This is technically called 'running with scissors', but the trade-off in the author's opinion is lower virtualisation overheads and simpler management of storage integration, including Xauthority details etc.

    The latest version of minikube is found here https://github.com/kubernetes/minikube/releases . Scroll down to the section for Linux, which will have instructions like:

    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
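
    Assuming the download succeeded and /usr/local/bin is on your PATH, a quick check that the binary is usable:

    minikube version    # should print the installed minikube version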

    Now we need to bootstrap minikube so that we have a running cluster using the none driver:

    sudo -E minikube start --vm-driver=none --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf

    This will take some time to set up the cluster and bootstrap Kubernetes. You will see output like the following when done.

    $ sudo -E minikube start --vm-driver=none --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf
    😄  minikube v0.34.1 on linux (amd64)
    🤹  Configuring local host environment ...
    
    ⚠️  The 'none' driver provides limited isolation and may reduce system security and reliability.
    ⚠️  For more information, see:
    👉  https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
    
    ⚠️  kubectl and minikube configuration will be stored in /home/ubuntu
    ⚠️  To use kubectl or minikube commands as your own user, you may
    ⚠️  need to relocate them. For example, to overwrite your own settings:
    
        ▪ sudo mv /home/ubuntu/.kube /home/ubuntu/.minikube $HOME
        ▪ sudo chown -R $USER /home/ubuntu/.kube /home/ubuntu/.minikube
    
    💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
    🔥  Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
    📶  "minikube" IP address is 192.168.86.29
    🐳  Configuring Docker as the container runtime ...
    ✨  Preparing Kubernetes environment ...
        ▪ kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf
    🚜  Pulling images required by Kubernetes v1.13.3 ...
    🚀  Launching Kubernetes v1.13.3 using kubeadm ...
    🔑  Configuring cluster permissions ...
    🤔  Verifying component health .....
    💗  kubectl is now configured to use "minikube"
    🏄  Done! Thank you for using minikube!

    The --extra-config=kubelet.resolv-conf=/var/run/systemd/resolve/resolv.conf flag works around the CoreDNS loopback problem on hosts that use systemd-resolved - you may not need it depending on your local setup.
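
    A quick way to tell whether your host uses the systemd-resolved stub resolver (and therefore needs this flag) is to inspect /etc/resolv.conf:

    # If /etc/resolv.conf points at the 127.0.0.53 stub, pass the real resolver file to the kubelet.
    ls -l /etc/resolv.conf
    cat /run/systemd/resolve/resolv.conf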

    Now fix up your permissions:

    sudo chown -R ${USER} /home/${USER}/.minikube
    sudo chgrp -R ${USER} /home/${USER}/.minikube
    sudo chown -R ${USER} /home/${USER}/.kube
    sudo chgrp -R ${USER} /home/${USER}/.kube

    Once completed, minikube will also update your kubectl settings to include the context current-context: minikube in ~/.kube/config. Test that connectivity works with something like:

    $ kubectl get pods -n kube-system
    NAME                               READY   STATUS    RESTARTS   AGE
    coredns-86c58d9df4-5ztg8           1/1     Running   0          3m24s
    ...

    Helm Chart

    The Helm Chart based install relies on Helm (surprise!). The easiest way to install is using the install script:

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh
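
    Once the script completes, a quick sanity check that Helm 3 is on your PATH:

    helm version --short    # should report a v3.x.x client version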

    Cleaning Up

    Note on cleaning up:

    minikube stop # stop minikube - this can be restarted with minikube start
    minikube delete # destroy minikube - totally gone!
    rm -rf ~/.kube # remove the local kubectl configuration
    # remove all other minikube related installation files
    sudo rm -rf /var/lib/kubeadm.yaml /data/minikube /var/lib/minikube /var/lib/kubelet /etc/kubernetes

    Tango-base

    The following set of instructions allows you to install tango-base and webjive in Kubernetes (namespace integration):

    git clone https://gitlab.com/ska-telescope/skampi.git
    cd skampi
    make deploy HELM_CHART=tango-base
    make deploy HELM_CHART=webjive
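
    Once both charts are deployed, the pods should appear in the integration namespace:

    kubectl get pods -n integration    # tango-base and webjive pods should reach the Running state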

    Traefik

    Install traefik with the following command:

    git clone https://gitlab.com/ska-telescope/skampi.git
    cd skampi
    make traefik EXTERNAL_IP=xxx.xxx.xxx.xxx

    Note that the external IP should be the internal IP of the machine.
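
    You can check that the traefik service picked up that address with something like:

    kubectl get svc --all-namespaces | grep traefik    # the EXTERNAL-IP column should show the address you passed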

    Install the Helm chart

    Once minikube is installed together with kubectl and helm, the following commands install the project:

    git submodule update --init --recursive
    make install-chart
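
    After the install, verify the release and its pods; the tango-grafana namespace below is the one used by the pipeline and is an assumption for a local install:

    helm list --all-namespaces          # the tango-grafana release should be listed
    kubectl get pods -n tango-grafana   # namespace is an assumption - adjust to your install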

    Values

    The values file in the Helm chart folder shows the options available. It is important to properly set up the following parameters:

    tango_exporter:
      tango_host: databaseds-tango-base-test.integration.svc.cluster.local:10000
    
    tango_gql_proxy:
      webjive_auth_url: http://webjive-webjive-test.integration.svc.cluster.local:8080/login
      tangogql_url: http://webjive-webjive-test.integration.svc.cluster.local:5004/db
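
    If you install the chart directly with Helm rather than through the Makefile, the same parameters can be overridden on the command line; the release name and chart path below are assumptions - adapt them to your checkout:

    helm upgrade --install tango-grafana chart/ \
      --set tango_exporter.tango_host=databaseds-tango-base-test.integration.svc.cluster.local:10000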

    /etc/hosts

    The TANGO-Grafana web engine will be available at the following hostnames:

    xxx.xxx.xxx.xxx	grafana.integration.engageska-portugal.pt
    xxx.xxx.xxx.xxx	tangogql-proxy.integration.engageska-portugal.pt

    Note that these IPs must be the IP of the machine where TANGO-Grafana is running.
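
    For example, the entries can be appended with (substitute the machine's real IP for xxx.xxx.xxx.xxx):

    # Substitute the real IP before running.
    echo "xxx.xxx.xxx.xxx grafana.integration.engageska-portugal.pt" | sudo tee -a /etc/hosts
    echo "xxx.xxx.xxx.xxx tangogql-proxy.integration.engageska-portugal.pt" | sudo tee -a /etc/hosts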