Commit d65fe09c authored by Jan David Mol's avatar Jan David Mol
Merge branch 'L2SS-795-remove-alerta' into 'master'

L2SS-795: Removed Alerta docker images & containers

Closes L2SS-795

See merge request !341
parents 4301dcb0 4523abbf
Showing 0 additions and 640 deletions
[submodule "tangostationcontrol/tangostationcontrol/toolkit/libhdbpp-python"]
path = tangostationcontrol/tangostationcontrol/toolkit/libhdbpp-python
url = https://gitlab.com/tango-controls/hdbpp/libhdbpp-python.git
[submodule "docker-compose/alerta-web"]
path = docker-compose/alerta-web
url = https://github.com/jjdmol/alerta-webui
branch = add-isa-18-2-states
FROM alerta/alerta-web
RUN bash -c 'source /venv/bin/activate; pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/slack'
RUN bash -c 'source /venv/bin/activate; pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/jira'
COPY grafana-plugin /tmp/grafana-plugin
RUN bash -c 'source /venv/bin/activate; pip install /tmp/grafana-plugin'
COPY lofar-plugin /tmp/lofar-plugin
RUN bash -c 'source /venv/bin/activate; pip install /tmp/lofar-plugin'
COPY lofar-routing-plugin /tmp/lofar-routing-plugin
RUN bash -c 'source /venv/bin/activate; pip install /tmp/lofar-routing-plugin'
COPY alertad.conf /app/alertad.conf
COPY alerta.conf /app/alerta.conf
COPY config.json /web/config.json
You need:
* Your own Slack App:
* Give it channel write rights
* Get the OAuth token
* Install it in your Slack workspace
* Invite the app into your channel
* Feed the OAuth token to the config
* Add it to alerta-secrets.json
* Grafana:
* By default, Grafana resends alarms every 4 hours; lower this in the notification settings to resend deleted alarms faster for testing
* Add alerts by hand
* add "Summary" as alert text
* add label "severity": "major"/"minor"/etc (see https://docs.alerta.io/webui/configuration.html#severity-colors)
* Create alerta-secrets.json in this directory:
Example alerta-secrets.json:
{
"SLACK_TOKEN": "xoxb-...",
"SLACK_CHANNEL": "#lofar20-alerta"
}
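A quick sanity check of the secrets file before starting the stack can be sketched as follows (``check_secrets`` is a hypothetical helper, not part of the repository):

```python
import json

# The two keys that alertad.conf reads from /run/secrets/alerta-secrets
REQUIRED_KEYS = {"SLACK_TOKEN", "SLACK_CHANNEL"}

def check_secrets(text: str) -> dict:
    """Parse alerta-secrets.json and verify both required keys are present."""
    secrets = json.loads(text)
    missing = REQUIRED_KEYS - secrets.keys()
    if missing:
        raise ValueError(f"alerta-secrets.json is missing keys: {sorted(missing)}")
    return secrets

example = '{"SLACK_TOKEN": "xoxb-...", "SLACK_CHANNEL": "#lofar20-alerta"}'
secrets = check_secrets(example)
```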
{
"SLACK_TOKEN": "xoxb-get-this-from-your-slack-app",
"SLACK_CHANNEL": "#your-channel"
}
[DEFAULT]
sslverify = no
output = presto
endpoint = http://localhost:8080/api
timezone = Europe/London
key = NpzX0z_fX8TVKZtXpzop-pi2MhaGnLawKVqbJBoA
debug = yes
import os
DEBUG = True
SECRET = "T=&7xvF2S&x7w_JAcq$h1x5ocfA)8H2i"
# Allow non-admin views
CUSTOMER_VIEWS = True
# Use more advanced ANSI/ISA 18.2 alarm model,
# which does not auto-close alarms and thus
# allows for tracking alarms that came and went.
ALARM_MODEL = "ISA_18_2"
# Never timeout alerts
ALERT_TIMEOUT = 0
# Auto unack after a day
ACK_TIMEOUT = 24 * 3600
# Auto unshelve after a week
SHELVE_TIMEOUT = 7 * 24 * 3600
# Use custom date formats
DATE_FORMAT_MEDIUM_DATE = "dd DD/MM HH:mm"
DATE_FORMAT_LONG_DATE = "yyyy-MM-DD HH:mm:ss.sss"
# Default overview settings
COLUMNS = ['severity', 'status', 'createTime', 'lastReceiveTime', 'resource', 'grafanaDashboardHtml', 'grafanaPanelHtml', 'event', 'text']
DEFAULT_FILTER = {'status': ['UNACK', 'RTNUN']}
SORT_LIST_BY = "createTime"
AUTO_REFRESH_INTERVAL = 5000 # ms
COLOR_MAP = {
'severity': {
'Critical': 'red',
'High': 'orange',
'Medium': '#FFF380', # corn yellow
'Low': 'dodgerblue',
'Advisory': 'lightblue',
'OK': '#00CC00', # lime green
'Unknown': 'silver'
},
'text': 'black'
}
# Allow alerta-web to refer to alerta-server for the client
CORS_ORIGINS = [
'http://localhost:8081',
'http://localhost:8082',
os.environ.get("BASE_URL", ""),
os.environ.get("DASHBOARD_URL", ""),
]
# ------------------------------------
# Plugin configuration
# ------------------------------------
PLUGINS = ['reject', 'blackout', 'acked_by', 'enhance', 'grafana', 'lofar', 'slack']
# Slack plugin settings, see https://github.com/alerta/alerta-contrib/tree/master/plugins/slack
import json
with open("/run/secrets/alerta-secrets") as secrets_file:
secrets = json.load(secrets_file)
SLACK_WEBHOOK_URL = 'https://slack.com/api/chat.postMessage'
SLACK_TOKEN = secrets["SLACK_TOKEN"]
SLACK_CHANNEL = secrets["SLACK_CHANNEL"]
SLACK_ATTACHMENTS = True
BASE_URL = os.environ.get("BASE_URL", "")
# for the Slack message configuration syntax, see https://api.slack.com/methods/chat.postMessage
# and https://app.slack.com/block-kit-builder
SLACK_PAYLOAD = {
"channel": "{{ channel }}",
"emoji": ":fire:",
"text": "*{{ alert.severity|capitalize }}* :: *{{ alert.resource }}* :: _{{ alert.event }}_\n\n```{{ alert.text }}```",
"attachments": [{
"color": "{{ color }}",
"fields": [
{"title": "Device", "value": "{{ alert.attributes.lofarDevice }}", "short": True },
{"title": "Attribute", "value": "{{ alert.attributes.lofarAttribute }}", "short": True },
{"title": "Resource", "value": "{{ alert.resource }}", "short": True },
{"title": "Status", "value": "{{ status|capitalize }}", "short": True },
{"title": "Dashboards", "value": "<{{ config.BASE_URL }}/#/alert/{{ alert.id }}|Alerta>\nGrafana <{{ alert.attributes.grafanaDashboardUrl }}|Dashboard> <{{ alert.attributes.grafanaPanelUrl }}|Panel>", "short": True },
{"title": "Configure", "value": "Grafana <{{ alert.attributes.grafanaAlertUrl }}|View> <{{ alert.attributes.grafanaSilenceUrl }}|Silence>", "short": True },
],
}]
}
{"endpoint": "/api"}
import os
import json
import logging
from alerta.plugins import PluginBase
LOG = logging.getLogger()
class EnhanceGrafana(PluginBase):
"""
Plugin for parsing alerts coming from Grafana
"""
def pre_receive(self, alert, **kwargs):
# Parse Grafana-specific fields
alert.attributes['grafanaStatus'] = alert.raw_data.get('status', '')
def htmlify(link: str, desc: str) -> str:
return f'<a href="{link}" target="_blank">{desc}</a>'
# User-specified "Panel ID" annotation
panelURL = alert.raw_data.get('panelURL', '')
if panelURL:
alert.attributes['grafanaPanelUrl'] = panelURL
alert.attributes['grafanaPanelHtml'] = htmlify(panelURL, "Grafana Panel")
# User-specified "Dashboard UID" annotation
dashboardURL = alert.raw_data.get('dashboardURL', '')
if dashboardURL:
alert.attributes['grafanaDashboardUrl'] = dashboardURL
alert.attributes['grafanaDashboardHtml'] = htmlify(dashboardURL, "Grafana Dashboard")
alertURL = alert.raw_data.get('generatorURL', '')
if alertURL:
# expose alert view URL, as user may not have edit rights
# Convert from
# http://host:3000/alerting/kujybCynk/edit
# to
# http://host:3000/alerting/grafana/kujybCynk/view
alertURL = alertURL.replace("/alerting/", "/alerting/grafana/").replace("/edit", "/view")
alert.attributes['grafanaAlertUrl'] = alertURL
alert.attributes['grafanaAlertHtml'] = htmlify(alertURL, "Grafana Alert")
silenceURL = alert.raw_data.get('silenceURL', '')
if silenceURL:
alert.attributes['grafanaSilenceUrl'] = silenceURL
alert.attributes['grafanaSilenceHtml'] = htmlify(silenceURL, "Grafana Silence Alert")
return alert
def post_receive(self, alert, **kwargs):
return
def status_change(self, alert, status, text, **kwargs):
return
def take_action(self, alert, action, text, **kwargs):
raise NotImplementedError
from setuptools import setup, find_packages
version = '1.0.0'
setup(
name="alerta-grafana",
version=version,
description='Alerta plugin for enhancing Grafana alerts',
url='https://git.astron.nl/lofar2.0/tango',
license='Apache License 2.0',
author='Jan David Mol',
author_email='mol@astron.nl',
packages=find_packages(),
py_modules=['alerta_grafana'],
include_package_data=True,
zip_safe=True,
entry_points={
'alerta.plugins': [
'grafana = alerta_grafana:EnhanceGrafana'
]
},
python_requires='>=3.5'
)
import os
import json
import logging
from alerta.plugins import PluginBase
import alerta.models.alarms.isa_18_2 as isa_18_2
LOG = logging.getLogger()
class EnhanceLOFAR(PluginBase):
"""
Plugin for enhancing alerts with LOFAR-specific information
"""
@staticmethod
def _fix_severity(alert):
"""
Force conversion of severity to ISA 18.2 model, to allow Alerta to parse the alert.
For example, the 'prometheus' webhook by default uses the 'warning' severity,
but also users might specify a non-existing severity level.
"""
if alert.severity not in isa_18_2.SEVERITY_MAP:
# Save original severity
alert.attributes['unparsableSeverity'] = alert.severity
translation = {
"normal": isa_18_2.OK,
"ok": isa_18_2.OK,
"cleared": isa_18_2.OK,
"warning": isa_18_2.LOW,
"minor": isa_18_2.MEDIUM,
"major": isa_18_2.HIGH,
"critical": isa_18_2.CRITICAL,
}
alert.severity = translation.get(alert.severity.lower(), isa_18_2.MEDIUM)
def pre_receive(self, alert, **kwargs):
self._fix_severity(alert)
# Parse LOFAR-specific fields
for tag in alert.tags:
try:
key, value = tag.split("=", 1)
except ValueError:
continue
if key == "device":
alert.attributes['lofarDevice'] = value
if key == "name":
alert.attributes['lofarAttribute'] = value
if key == "station":
alert.resource = value
return alert
def post_receive(self, alert, **kwargs):
return
def status_change(self, alert, status, text, **kwargs):
return
def take_action(self, alert, action, text, **kwargs):
raise NotImplementedError
from setuptools import setup, find_packages
version = '1.0.0'
setup(
name="alerta-lofar",
version=version,
description='Alerta plugin for enhancing LOFAR alerts',
url='https://git.astron.nl/lofar2.0/tango',
license='Apache License 2.0',
author='Jan David Mol',
author_email='mol@astron.nl',
packages=find_packages(),
py_modules=['alerta_lofar'],
include_package_data=True,
zip_safe=True,
entry_points={
'alerta.plugins': [
'lofar = alerta_lofar:EnhanceLOFAR'
]
},
python_requires='>=3.5'
)
import logging
from alerta.app import alarm_model
from alerta.models.enums import ChangeType
LOG = logging.getLogger('alerta.plugins.routing')
# For a description of this interface,
# see https://docs.alerta.io/gettingstarted/tutorial-3-plugins.html?highlight=rules#step-3-route-alerts-to-plugins
def rules(alert, plugins, config):
if alert.previous_severity is None:
# The alert still has to be parsed, and enriched, before it is
# merged into existing alerts.
return rules_prereceive(alert, plugins, config)
else:
# The alert has been processed. Check to which plugins we
# want to send it.
return rules_postreceive(alert, plugins, config)
def rules_prereceive(alert, plugins, config):
""" Rules to determine which processing filters to use. """
# no filtering
return (plugins.values(), {})
def _is_new_problem(alert) -> bool:
""" Return whether the state change denotes a newly identified issue
on a system that (as far as the operator knew) was fine before.
Returns True when detecting NORM -> UNACK transitions, and False
on any duplicates of this transition.
Note that RTNUN -> UNACK is thus not triggered on. """
if alert.status != 'UNACK':
# Only report problems (not ACKing, SHELVing, etc)
return False
elif alert.last_receive_time != alert.update_time:
# Ignore anything that didn't update the alert,
# to avoid triggering on alerts that repeat
# the current situation
return False
else:
# Only report if the previous status was NORM, to avoid
# triggering on (f.e.) RTNUN -> UNACK transitions.
for h in alert.history: # is sorted new -> old
if h.status == alert.status:
# ignore any update that didn't change the status
continue
return h.status == "NORM"
# ... or if there was no previous status (a brand new alert)
return True
def rules_postreceive(alert, plugins, config):
""" Rules to determine which emission methods to use. """
# decide whether to notify the user on slack
send_to_slack = _is_new_problem(alert)
LOG.debug(f"Sending alert {alert.event} with status {alert.status} and severity {alert.previous_severity} => {alert.severity} to slack? {send_to_slack}")
# filter the plugin list based on these decisions
use_plugins = []
for name, plugin in plugins.items():
if name == 'slack' and not send_to_slack:
pass
else:
use_plugins.append(plugin)
return (use_plugins, {})
from setuptools import setup, find_packages
version = '1.0.0'
setup(
name="alerta-routing",
version=version,
description='Alerta plugin to configure LOFAR custom alert routing',
url='https://git.astron.nl/lofar2.0/tango',
license='Apache License 2.0',
author='Jan David Mol',
author_email='mol@astron.nl',
packages=find_packages(),
py_modules=['routing'],
include_package_data=True,
zip_safe=True,
entry_points={
'alerta.routing': [
'rules = routing:rules'
]
},
python_requires='>=3.5'
)
Subproject commit 9ee69dfbd0e33604169604b5a5cc506d560cb60b
version: '2.1'
volumes:
alerta-postgres-data: {}
secrets:
alerta-secrets:
file: alerta-server/alerta-secrets.json
services:
alerta-web:
build: alerta-web
container_name: alerta-web
networks:
- control
ports:
- 8081:80
depends_on:
- alerta-server
command: >
sh -c 'echo {\"endpoint\": \"http://\${HOSTNAME}:8082/api\"} > /usr/share/nginx/html/config.json &&
nginx -g "daemon off;"'
restart: always
alerta-server:
build: alerta-server
container_name: alerta-server
networks:
- control
ports:
- 8082:8080 # NOTE: This exposes an API and a web UI. Ignore the web UI as we replaced it with alerta-web
depends_on:
- alerta-db
secrets:
- alerta-secrets
environment:
- DEBUG=1 # remove this line to turn DEBUG off
- DATABASE_URL=postgres://postgres:postgres@alerta-db:5432/monitoring
- BASE_URL=http://${HOSTNAME}:8081
- DASHBOARD_URL=http://${HOSTNAME}:8081
- AUTH_REQUIRED=True
- ADMIN_USERS=admin #default password: alerta
- ADMIN_KEY=demo-key
restart: always
alerta-db:
image: postgres
container_name: alerta-db
networks:
- control
volumes:
- alerta-postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_DB: monitoring
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
restart: always
Alerting
==================
We use the following setup to forward alarms:
- The Tango Controls `hdbpp subsystem <https://tango-controls.readthedocs.io/en/latest/administration/services/hdbpp/hdb++-design-guidelines.html>`_ archives data-value changes into a `TimescaleDB <http://timescale.com>`_ database,
- Grafana allows `Alert rules <https://grafana.com/docs/grafana/latest/alerting/>`_ to be configured, which poll TimescaleDB and generate an *alert* when the configured condition is met. It also maintains a list of currently firing alerts,
- `Alerta <https://alerta.io/>`_ is the *alert manager*: it receives these alerts, manages duplicates, and maintains alerts until the operator explicitly acknowledges them. It thus also has a list of alerts that fired in the past.
Archiving attributes
```````````````````````
The attributes of interest have to be *archived* periodically to be able to see them in Grafana, and thus to be able to define alerts for them. In Tango Controls, a *configuration manager* provides an interface to manage what is archived, and one or more *event subscribers* subscribe to attribute changes and forward them to the archive database.
The ``tangostationcontrol.toolkit.archiver.Archiver`` class provides an easy interface to the archiver. It uses the ``device/attribute`` notation for attributes, f.e. ``STAT/SDP/1/FPGA_error_R``. Some of the functions it provides:
:add_attribute_to_archiver(attribute, polling_period, event_period): Register the given attribute, polling it every ``polling_period`` ms. The attribute is also archived on changes, at a maximum rate of one event per ``event_period`` ms.
:remove_attribute_from_archiver(attribute): Unregister the given attribute.
:start_archiving_attribute(attribute): Start archiving the given attribute.
:stop_archiving_attribute(attribute): Stop archiving the given attribute.
:get_attribute_errors(attribute): Return any errors detected while trying to archive the attribute.
:get_subscriber_errors(): Return any errors detected by the subscribers.
So a useful idiom to archive an individual attribute is::
from tangostationcontrol.archiver import Archiver
archiver = Archiver()
attribute = "STAT/SDP/1/FPGA_error_R"
archiver.add_attribute_to_archiver(attribute, 1000, 1000)
archiver.start_archiving_attribute(attribute)
.. note:: The archive subscriber gets confused if attributes it archives disappear from the monitoring database. This can cause an archive subscriber to stall. To fix this, get a proxy to the event subscriber, f.e. ``DeviceProxy("archiving/hdbppts/eventsubscriber01")``, and remove the offending attribute(s) from the ``ArchivingList`` property using ``proxy.get_property("ArchivingList")`` and ``proxy.put_property({"ArchivingList": [...]})``.
Inspecting the database
`````````````````````````
The archived attributes end up in a `TimescaleDB <http://timescale.com>`_ database, exposed on port 5432, with credentials ``postgres/password``. Key tables are:
:att_conf: Describes which attributes are registered. Note that any device and attribute names are in lower case.
:att_scalar_devXXX: Contains the attribute history for scalar attributes of type XXX.
:att_array_devXXX: Contains the attribute history for 1D array attributes of type XXX.
:att_image_devXXX: Contains the attribute history for 2D array attributes of type XXX.
Each of the attribute history tables contains entries for any recorded value changes, but also for changes in ``quality`` (0=ok, >0=issues), and any error in ``att_error_desc_id``. Furthermore, we provide specialised views which combine these tables into more readable information:
:lofar_scalar_XXX: View on the attribute history for scalar attributes of type XXX.
:lofar_array_XXX: View on the attribute history for 1D array attributes of type XXX. Each array element is returned in its own row, with ``x`` denoting the index.
:lofar_image_XXX: View on the attribute history for 2D array attributes of type XXX. Each array element is returned in its own row, with ``x`` and ``y`` denoting the indices.
A typical selection could thus look like::
SELECT
date_time AS time, device, name, x, value
FROM lofar_array_boolean
WHERE device = 'stat/sdp/1' AND name = 'fpga_error_r'
ORDER BY time DESC
LIMIT 16
Attributes in Grafana
````````````````````````
The Grafana instance (http://localhost:3000) is linked to TimescaleDB by default. The query for plotting an attribute requires some Grafana-specific macros to select the exact data points Grafana requires::
SELECT
$__timeGroup(data_time, $__interval),
x::text, device, name,
value
FROM lofar_array_boolean
WHERE
$__timeFilter(data_time) AND name = 'fpga_error_r'
ORDER BY 1,2
The fields ``x``, ``device``, and ``name`` are retrieved as *string*, as that makes them labels to the query, which Grafana then uses to identify the different metrics for each array element.
.. hint:: Grafana orders labels alphabetically. To order the ``x`` element properly, one could use the ``TO_CHAR(x, '00')`` function instead of ``x::text`` to prepend values with 0.
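Applying that hint to the query above gives, for example:

```sql
SELECT
  $__timeGroup(data_time, $__interval),
  TO_CHAR(x, '00'), device, name,
  value
FROM lofar_array_boolean
WHERE
  $__timeFilter(data_time) AND name = 'fpga_error_r'
ORDER BY 1,2
```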
Setting up alerts
```````````````````
We use `Grafana 8+ alerts <https://grafana.com/docs/grafana/latest/alerting/>`_ to monitor our system, and the alerts are to be forwarded to our Alerta instance. Both our default set of alerts and this forwarding have to be configured after installation:
- Go to Grafana (http://localhost:3000) and sign in with an administration account (default: admin/admin),
- Go to ``(cogwheel) -> API keys`` and create an ``editor`` API key. Copy the resulting hash,
- Go to the ``docker-compose/grafana/`` source directory, and run::
./import-rules.py -c alerting.json -r rules.json -B <apikey> | bash
.. hint:: Whether Grafana can send alerts to Alerta can be tested by sending a `test alert <http://localhost:3000/alerting/notifications/receivers/Alerta/edit?alertmanager=grafana>`_.
The following enhancements are useful to configure for the alerts:
- You'll want to alert on a query, followed by a ``Reduce`` step with Function ``Last`` and Mode ``Drop Non-numeric Value``. This triggers the alert on the latest value(s), but keeps the individual array elements separated,
- In ``Add details``, the ``Dashboard UID`` and ``Panel ID`` annotations are useful to configure to where you want the user to go, as Grafana will generate hyperlinks from them. To obtain a dashboard uid, go to ``Dashboards -> Browse`` and check out its URL. For the panel id, view a panel and check the URL,
- In ``Add details``, the ``Summary`` annotation will be used as the alert description,
- In ``Custom labels``, add ``severity = High`` to raise the severity of the alert (default: Low). See also the `supported values <https://github.com/alerta/alerta/blob/master/alerta/models/alarms/isa_18_2.py#L14>`_.
Alerta dashboard
``````````````````
The Alerta dashboard (http://localhost:8081) provides an overview of received alerts, according to the ISA 18.2 Alarm Model. It distinguishes the following states:
- ``NORM``: the situation is nominal (any past alarm condition has been acknowledged),
- ``UNACK``: an alarm condition is active, which has not been acknowledged by an operator,
- ``RTNUN``: an alarm condition came and went, but has not been acknowledged by an operator,
- ``ACKED``: an alarm condition is active, and has been acknowledged by an operator.
Furthermore, the following rarer states are known:
- ``SHLVD``: the alert is put aside, regardless of its condition,
- ``DSUPR``: the alert is intentionally suppressed,
- ``OOSRV``: the alert concerns something out of service, and thus should be ignored.
Any alerts stay in the displayed list until the alert condition disappears, *and* the alert is explicitly acknowledged, shelved, or deleted:
- *Acknowledging* an alert silences it for a day, unless its severity rises,
- *Shelving* an alert silences it for a week, regardless of what happens,
- *Watching* an alert means receiving browser notifications on changes,
- *Deleting* an alert removes it until Grafana sends it again (default: 10 minutes).
See ``docker-compose/alerta-server/alertad.conf`` for these settings.
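These timeouts correspond to the following settings shipped in ``docker-compose/alerta-server/alertad.conf`` (the file is parsed as Python by Alerta):

```python
# Excerpt from alertad.conf
ALERT_TIMEOUT = 0               # never time out alerts
ACK_TIMEOUT = 24 * 3600         # auto-unacknowledge after a day
SHELVE_TIMEOUT = 7 * 24 * 3600  # auto-unshelve after a week
```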
Several installed plugins enhance the received events:
- ``slack`` plugin forwards alerts to Slack (see below),
- Our own ``grafana`` plugin parses Grafana-specific fields and adds them to the alert,
- Our own ``lofar`` plugin parses and generates LOFAR-specific fields.
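The core of the ``lofar`` plugin's tag parsing can be sketched standalone (``DummyAlert`` and ``parse_lofar_tags`` are illustrative names; the real plugin subclasses ``alerta.plugins.PluginBase``):

```python
class DummyAlert:
    """Stand-in for an Alerta alert, carrying only the fields the sketch uses."""
    def __init__(self, tags):
        self.tags = tags
        self.attributes = {}
        self.resource = None

def parse_lofar_tags(alert):
    """Copy LOFAR-specific key=value tags into alert attributes."""
    for tag in alert.tags:
        try:
            key, value = tag.split("=", 1)
        except ValueError:
            continue  # not a key=value tag, skip it
        if key == "device":
            alert.attributes["lofarDevice"] = value
        elif key == "name":
            alert.attributes["lofarAttribute"] = value
        elif key == "station":
            alert.resource = value
    return alert

alert = parse_lofar_tags(
    DummyAlert(["device=STAT/SDP/1", "name=FPGA_error_R", "station=CS001"])
)
```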
Slack integration
```````````````````
Our Alerta setup is configured to send alerts to Slack. To set this up, you need to:
- Create a Slack App: https://api.slack.com/apps?new_app=1
- Under ``OAuth & Permissions``, add the following ``OAuth Scope``: ``chat:write``,
- Install the App in your Workspace,
- Copy the ``OAuth Token``.
.. hint:: To obtain the ``OAuth Token`` later on, go to https://api.slack.com/apps, click on your App, and look under ``Install App``.
Now, edit ``docker-compose/alerta-server/alerta-secrets.json``:
.. literalinclude:: ../../../docker-compose/alerta-server/alerta-secrets.json
The ``SLACK_TOKEN`` is the ``OAuth Token``, and the ``SLACK_CHANNEL`` is the channel in which to post the alerts.
Any further tweaking can be done by modifying ``docker-compose/alerta-web/alertad.conf``.
Debugging hints
````````````````````````
- Grafana sends alerts to Alerta using the *Prometheus AlertManager* format, and thus uses the Prometheus webhook to do so. To see what Grafana emits, configure it to send to your custom https://hookbin.com/ endpoint,
- Grafana by default resends firing alerts every 4 hours; we lowered this to 10 minutes. This means that if an alert was successfully sent but lost (or deleted), it takes that long to get it back. For debugging, you may want to lower this to f.e. 10 seconds in the ``Alerting -> Notification policies`` settings of Grafana,
- Alerta has a plugin system which allows easily modifying the attributes of an alert (see ``docker-compose/alerta-web`` and https://github.com/alerta/alerta-contrib). To see which attributes an alert has, simply go to the alert in the web GUI, press *Copy*, and paste in your editor,
- Alerta allows a ``DEBUG=True`` parameter in ``docker-compose/alerta-web/alertad.conf`` to generate debug output.
@@ -32,7 +32,6 @@ Even without having access to any LOFAR2.0 hardware, you can install the full st
devices/temperature-manager
devices/configure
configure_station
-alerting
signal_chain
beam_tracking
developer