diff --git a/docs/source/grafana.rst b/docs/source/grafana.rst
index 4b7d51b2e293adb156c6b865beda5733a692b1c2..0eed2d00b6d01d88e70ac659ffb3df6f22d152ef 100644
--- a/docs/source/grafana.rst
+++ b/docs/source/grafana.rst
@@ -1,5 +1,5 @@
 Grafana: Monitoring Dashboards
-========================================
+------------------------------------------
 
 We use `Grafana <https://grafana.com/docs/grafana/latest/introduction/>`_ to visualise the monitoring information through a series of *dashboards*. It allows us to:
 
@@ -8,7 +8,7 @@ We use `Grafana <https://grafana.com/docs/grafana/latest/introduction/>`_ to vis
 * Add *alerts* that trigger when monitoring point formulas reach a certain threshold.
 
 Configuration
----------------------------------
+`````````````````````````````````
 
 Grafana comes with preinstalled datasources and dashboards, provided in the ``grafana-central/`` directory. By default, the following datasources are configured:
 
@@ -17,19 +17,19 @@ Grafana comes with preinstalled datasources and dashboards, provided in the ``gr
 * *Grafana API*, providing access to Grafana's API (see e.g. the `Grafana Alerting REST API <https://editor.swagger.io/?url=https://raw.githubusercontent.com/grafana/grafana/main/pkg/services/ngalert/api/tooling/post.json>`_).
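 
 Each datasource corresponds to a provisioning file under ``grafana-central/``. As a rough sketch only (the file name, datasource name and URL below are assumptions, not the shipped configuration), a Prometheus datasource would be provisioned along these lines::
 
    # Hypothetical example: grafana-central/provisioning/datasources/prometheus.yaml
    apiVersion: 1
    datasources:
      - name: Prometheus                     # display name in Grafana (assumed)
        type: prometheus
        access: proxy                        # Grafana proxies queries to the datasource
        url: http://prometheus-central:9090  # container-internal address (assumed)
        isDefault: true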
 
 Using Grafana
----------------------------------
+`````````````````````````````````
 
 Go to http://localhost:3001 to access the Grafana instance. The default guest access allows you to view dashboards and to manually inspect the data in the datasources. To create or edit dashboards, or to change settings, you need to sign in. The default credentials are ``admin/admin``.
 
 Adding alerts
----------------------------------
+`````````````````````````````````
 
 We use the `Grafana 8+ alerts <https://grafana.com/docs/grafana/latest/alerting/>`_ to monitor our system. You can add alerts to panels, or add free-floating ones under the ``(alarm bell) -> Alert rules`` menu, which is also used to browse the state of the existing alerts. Some tips:
 
 * Select the *Alert groups* tab to filter alerts or apply custom grouping, for example, by station or by component.
 
 Forwarding alerts to Alerta
----------------------------------
+`````````````````````````````````
 
 Alerts in Grafana come and go without leaving a record of ever having fired. To keep track of them, we forward all alerts to our Alerta instance. This forwarding has to be configured manually:
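 
 In outline, this amounts to creating a *contact point* for Alerta (under the same alarm-bell menu) and routing alerts to it with a notification policy. Purely as an illustration, the hypothetical sketch below shows what such a webhook contact point would look like in Grafana's alerting provisioning format; the Alerta webhook URL, all names, and whether file provisioning is available at all depend on the Grafana and Alerta versions in use::
 
    # Hypothetical sketch only; in this setup the forwarding is configured by hand in the UI.
    apiVersion: 1
    contactPoints:
      - orgId: 1
        name: alerta                # assumed contact point name
        receivers:
          - uid: alerta
            type: webhook
            settings:
              # Assumed Alerta endpoint; adjust the host, port and path to your deployment.
              url: http://alerta:8080/api/webhooks/prometheus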
 
diff --git a/docs/source/intro.rst b/docs/source/intro.rst
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..a6ecfc7626eff00801262fdcc1bf5c3e3d1c4fb8 100644
--- a/docs/source/intro.rst
+++ b/docs/source/intro.rst
@@ -0,0 +1,14 @@
+Introduction
+=====================================
+
+The Operations Central Monitoring setup provides you with the following user services:
+
+* A *Grafana* monitoring & alerting system, exposed on http://localhost:3001,
+* An *Alerta* alarm-management system, exposed on http://localhost:8081.
+
+It also includes the following backing services that support the setup:
+
+* A *Prometheus* database that collects monitoring information from the instrument, exposed on http://localhost:9091,
+* A *Node Exporter* that publishes monitoring information about the host running this software stack, exposed on http://localhost:9100.
+
+.. hint:: The URLs assume you're running this software on localhost. Replace ``localhost`` with the hostname of the hosting system if you're accessing this software on a server.
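+
+The ports above are the host-side ends of the container port mappings. Assuming the stack is brought up with Docker Compose, the mappings would look roughly like the hypothetical fragment below; the service names and container-internal ports are assumptions, not the actual compose file::
+
+   # Hypothetical docker-compose fragment, for illustration only
+   services:
+     grafana-central:
+       ports: ["3001:3000"]             # Grafana UI
+     alerta:
+       ports: ["8081:8080"]             # Alerta UI/API
+     prometheus-central:
+       ports: ["9091:9090"]             # Prometheus UI/API
+     prometheus-node-exporter:
+       ports: ["9100:9100"]             # host metrics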
diff --git a/docs/source/prometheus.rst b/docs/source/prometheus.rst
index 0a8637eb9f94ac3b5d31b1d74c6d101a0265a372..38f99cb33ea39787f79a2b5275618a4282a456c7 100644
--- a/docs/source/prometheus.rst
+++ b/docs/source/prometheus.rst
@@ -1,5 +1,5 @@
 Prometheus: Aggregating Monitoring Data
-========================================
+------------------------------------------
 
 We use `Prometheus <https://prometheus.io/docs/introduction/overview/>`_ to *scrape* monitoring data ("metrics") from across the telescope, and collect it into a single time-series database. Our Prometheus instance is running as the ``prometheus-central`` docker container, which periodically (every 10-60s) obtains metrics from the configured end points. This setup has several advantages:
 
@@ -8,7 +8,7 @@ We use `Prometheus <https://prometheus.io/docs/introduction/overview/>`_ to *scr
 * Widespread support. Many open-source packages already provide a Prometheus metrics end point out of the box.
 
 Configuration
----------------------------------
+`````````````````````````````````
 
 The scraping configuration is provided in ``prometheus-central/prometheus.yml``:
 
@@ -21,7 +21,7 @@ The following end points are scraped:
 * Local machine. Metrics from the machine running our containers are scraped (provided by the ``prometheus-node-exporter`` container).
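 
 For orientation, a single scrape job in ``prometheus.yml`` has roughly the following shape. This is a generic sketch with an assumed job name and target, not a copy of the file in ``prometheus-central/``::
 
    # Minimal sketch of a Prometheus scrape configuration (illustrative values only)
    global:
      scrape_interval: 30s        # within the 10-60s range mentioned above
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ["prometheus-node-exporter:9100"]   # host metrics end point (assumed address)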
 
 Inspection in Prometheus
----------------------------------
+`````````````````````````````````
 
 The Prometheus server provides a direct interface on http://localhost:9091 to query the database. PromQL allows you to specify which metric(s) to view, combine, filter, scale, etc. Some general statistics about the scraping process are provided by the following queries::
 
@@ -34,7 +34,7 @@ The Prometheus server provides a direct interface on http://localhost:9091 to qu
 NB: The timestamp(s) for which the data is requested are not part of the query itself, but are configured separately. In this interface, that is the time picker, which defaults to "now".
 
 Metrics and queries
----------------------------------
+`````````````````````````````````
 
 Prometheus stores each value as an independent metric, identified by a series *name* and string key-value *labels*, and carrying a float or integer value, for example (see also the `Prometheus Data Model <https://prometheus.io/docs/concepts/data_model/>`_)::
 
diff --git a/docs/source/stack.rst b/docs/source/stack.rst
new file mode 100644
index 0000000000000000000000000000000000000000..59d2b1e71c2ee5847d07f02807b56f37ea568999
--- /dev/null
+++ b/docs/source/stack.rst
@@ -0,0 +1,4 @@
+Software Stack
+===========================================
+
+The following sections describe how the software stack is set up, how it can be configured, and how to interact with it at a lower level.