Commit 3a303670 authored by Stuart Mark James
Update documentation, moved some docs to consolidated project

parent 2aa2a439
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Changed
- Moved some system documentation to hdbpp-timescale-project (the consolidated project).
- Consolidated remaining build/install instructions into README
- Modified build system to use fetch libhdbpp and include it when requested. This is an aid to development.
- Supported LIBHDBPP_PROJECT_BUILD flag, that is injected into the build from hdbpp-timescale-project
- Made compatible with new libhdbpp (namespace and path changes)
### Removed
- Removed the embedded version of libhdbpp (the build can now source it at build time)
## [0.11.2] - 2020-01-23
### Fixed
- [Bug Reports + Feature Requests](#Bug-Reports--Feature-Requests)
- [Documentation](#Documentation)
- [Building](#Building)
- [Dependencies](#Dependencies)
- [Toolchain Dependencies](#Toolchain-Dependencies)
- [Build Dependencies](#Build-Dependencies)
- [Building Process](#Building-Process)
- [Build Flags](#Build-Flags)
- [Standard CMake Flags](#Standard-CMake-Flags)
- [Project Flags](#Project-Flags)
- [Running Tests](#Running-Tests)
- [Unit Tests](#Unit-Tests)
- [Benchmark Tests](#Benchmark-Tests)
- [Installing](#Installing)
- [System Dependencies](#System-Dependencies)
- [Installation](#Installation)
- [License](#License)
HDB++ backend library for the TimescaleDb extension to Postgresql. This library is loaded by libhdbpp to archive events from a Tango Controls system. Currently in a pre v1 release phase.
Please file the bug reports and feature requests in the issue tracker
## Building
To build the shared library please read the following.
### Dependencies
The project has two types of dependencies: those required by the toolchain, and those needed to do the actual build. Other dependencies are integrated directly into the project as submodules. The following third-party modules exist:
* libpqxx - Modern C++ Postgresql library (Submodule)
* spdlog - Logging system (Submodule)
* Catch2 - Unit test subsystem (Submodule)
* libhdbpp - Part of the hdb++ library loading chain (Modified version of [original](https://github.com/tango-controls-hdbpp/libhdbpp) project. This will be pushed back to the original repository in time)
#### Toolchain Dependencies
To build the project, ensure the following dependencies are met:
* CMake 3.6 or higher
* C++14 compatible compiler (the code base uses C++14)
#### Build Dependencies
Ensure the development version of the dependencies are installed. These are as follows:
* Tango Controls 9 or higher development headers and libraries
* omniORB release 4 or higher development headers and libraries
* libzmq3-dev or libzmq5-dev
* libpq-dev - Postgres C development library
### Building Process
To compile this library, first ensure it has been recursively cloned so all submodules are present in /thirdparty. The build system uses pkg-config to find some dependencies, for example Tango. If Tango is not installed in a standard location, set PKG_CONFIG_PATH, e.g.:
```bash
export PKG_CONFIG_PATH=/non/standard/tango/install/location
```
Then to build just the library:
```bash
mkdir -p build
cd build
cmake ..
make
```
The pkg-config path can also be set with the cmake argument CMAKE_PREFIX_PATH. This can be set on the command line at configuration time, e.g.:
```bash
...
cmake -DCMAKE_PREFIX_PATH=/non/standard/tango/install/location ..
...
```
### Build Flags
The following build flags are available:
#### Standard CMake Flags
The following is a list of common useful CMake flags and their use:
| Flag | Setting | Description |
|------|-----|-----|
| CMAKE_INSTALL_PREFIX | PATH | Standard CMake flag to modify the install prefix. |
| CMAKE_INCLUDE_PATH | PATH[S] | Standard CMake flag to add include paths to the search path. |
| CMAKE_LIBRARY_PATH | PATH[S] | Standard CMake flag to add paths to the library search path |
| CMAKE_BUILD_TYPE | Debug/Release | Build type to produce |
#### Project Flags
| Flag | Setting | Default | Description |
|------|-----|-----|-----|
| BUILD_UNIT_TESTS | ON/OFF | OFF | Build unit tests |
| BUILD_BENCHMARK_TESTS | ON/OFF | OFF | Build benchmark tests (Forces a Release build) |
| ENABLE_CLANG | ON/OFF | OFF | Clang code static analysis, readability, and cppcore guideline enforcement |
| FETCH_LIBHDBPP | ON/OFF | OFF | Enable to have the build fetch and use a local version of libhdbpp |
| FETCH_LIBHDBPP_TAG | | master | When FETCH_LIBHDBPP is enabled, this is the git tag to fetch |
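As an example of combining project flags, the following configure step fetches libhdbpp at a given tag and enables the unit tests in one go (the tag shown is illustrative, adjust it to your needs):

```bash
mkdir -p build
cd build
# Fetch libhdbpp at build time rather than using a system install,
# and build the Catch2 unit tests alongside the library
cmake -DFETCH_LIBHDBPP=ON -DFETCH_LIBHDBPP_TAG=master -DBUILD_UNIT_TESTS=ON ..
make
```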
### Running Tests
#### Unit Tests
The project has extensive unit tests to ensure it functions as expected. Build the project with testing enabled:
```bash
mkdir -p build
cd build
cmake -DBUILD_UNIT_TESTS=ON ..
make
```
To run all unit tests, a Postgresql database node with the project schema loaded is required. There is a default connection string inside test/TestHelpers.hpp:
```
user=postgres host=localhost port=5432 dbname=hdb password=password
```
If you run the hdb timescale docker image associated with this project locally then this will connect automatically. If you wish to use a different database, edit the string in test/TestHelpers.hpp.
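As a convenience, a matching database can be started locally with the project's docker image; the image name below is a placeholder, substitute the actual image from your registry:

```bash
# Start a disposable database for the unit tests, exposing the default port
docker run -d --name hdb-test -p 5432:5432 <hdb-timescale-image>
```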
To run all tests:
```bash
./test/unit-tests
```
To see the available tests and tags, should you wish to run a subset of the test suite (for example, if you do not have a Postgresql node to test against), the tests can be listed:
```bash
./bin/unit-tests --list-tests
```
Or:
```bash
./bin/unit-tests --list-tags
```
To see more options for the unit-test command line binary:
```bash
./bin/unit-tests --help
```
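Since the tests are built on Catch2, a subset can then be run by passing a test name or tag as a test spec; the tag below is illustrative, use one from the `--list-tags` output:

```bash
./bin/unit-tests "[some-tag]"     # run only tests with this tag
./bin/unit-tests "~[some-tag]"    # run everything except this tag
```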
#### Benchmark Tests
These are a work in progress to explore future optimisation points. If built, they can be run as follows:
```bash
mkdir -p build
cd build
cmake -DBUILD_BENCHMARK_TESTS=ON ..
make
```
```bash
./benchmark/benchmark-tests
```
## Installing
All submodules are combined into the final library for ease of deployment. This means just the libhdbpp-timescale.so binary needs deploying to the target system.
### System Dependencies
The running system requires libpq5 installed to support calls to Postgresql. On Debian/Ubuntu this can be deployed as follows:
```bash
sudo apt-get install libpq5
```
### Installation
After the build has completed, simply run:
```
sudo make install
```
The shared library will be installed to /usr/local/lib on Debian/Ubuntu systems.
## License
ALTER TABLE att_scalar_devboolean CLUSTER ON att_scalar_devboolean_att_conf_id_data_time_idx;
ALTER TABLE att_array_devboolean CLUSTER ON att_array_devboolean_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devuchar CLUSTER ON att_scalar_devuchar_att_conf_id_data_time_idx;
ALTER TABLE att_array_devuchar CLUSTER ON att_array_devuchar_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devshort CLUSTER ON att_scalar_devshort_att_conf_id_data_time_idx;
ALTER TABLE att_array_devshort CLUSTER ON att_array_devshort_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devushort CLUSTER ON att_scalar_devushort_att_conf_id_data_time_idx;
ALTER TABLE att_array_devushort CLUSTER ON att_array_devushort_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devlong CLUSTER ON att_scalar_devlong_att_conf_id_data_time_idx;
ALTER TABLE att_array_devlong CLUSTER ON att_array_devlong_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devulong CLUSTER ON att_scalar_devulong_att_conf_id_data_time_idx;
ALTER TABLE att_array_devulong CLUSTER ON att_array_devulong_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devlong64 CLUSTER ON att_scalar_devlong64_att_conf_id_data_time_idx;
ALTER TABLE att_array_devlong64 CLUSTER ON att_array_devlong64_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devulong64 CLUSTER ON att_scalar_devulong64_att_conf_id_data_time_idx;
ALTER TABLE att_array_devulong64 CLUSTER ON att_array_devulong64_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devfloat CLUSTER ON att_scalar_devfloat_att_conf_id_data_time_idx;
ALTER TABLE att_array_devfloat CLUSTER ON att_array_devfloat_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devdouble CLUSTER ON att_scalar_devdouble_att_conf_id_data_time_idx;
ALTER TABLE att_array_devdouble CLUSTER ON att_array_devdouble_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devstring CLUSTER ON att_scalar_devstring_att_conf_id_data_time_idx;
ALTER TABLE att_array_devstring CLUSTER ON att_array_devstring_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devstate CLUSTER ON att_scalar_devstate_att_conf_id_data_time_idx;
ALTER TABLE att_array_devstate CLUSTER ON att_array_devstate_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devencoded CLUSTER ON att_scalar_devencoded_att_conf_id_data_time_idx;
ALTER TABLE att_array_devencoded CLUSTER ON att_array_devencoded_att_conf_id_data_time_idx;
ALTER TABLE att_scalar_devenum CLUSTER ON att_scalar_devenum_att_conf_id_data_time_idx;
ALTER TABLE att_array_devenum CLUSTER ON att_array_devenum_att_conf_id_data_time_idx;
CLUSTER att_scalar_devboolean;
CLUSTER att_array_devboolean;
CLUSTER att_scalar_devuchar;
CLUSTER att_array_devuchar;
CLUSTER att_scalar_devshort;
CLUSTER att_array_devshort;
CLUSTER att_scalar_devushort;
CLUSTER att_array_devushort;
CLUSTER att_scalar_devlong;
CLUSTER att_array_devlong;
CLUSTER att_scalar_devulong;
CLUSTER att_array_devulong;
CLUSTER att_scalar_devlong64;
CLUSTER att_array_devlong64;
CLUSTER att_scalar_devulong64;
CLUSTER att_array_devulong64;
CLUSTER att_scalar_devfloat;
CLUSTER att_array_devfloat;
CLUSTER att_scalar_devdouble;
CLUSTER att_array_devdouble;
CLUSTER att_scalar_devstring;
CLUSTER att_array_devstring;
CLUSTER att_scalar_devstate;
CLUSTER att_array_devstate;
CLUSTER att_scalar_devencoded;
CLUSTER att_array_devencoded;
CLUSTER att_scalar_devenum;
CLUSTER att_array_devenum;
-- Roles
CREATE ROLE readonly;
CREATE ROLE readwrite;
-- Permissions - readonly
GRANT CONNECT ON DATABASE hdb TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
-- Permissions - readwrite
GRANT CONNECT ON DATABASE hdb TO readwrite;
GRANT USAGE ON SCHEMA public TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO readwrite;
GRANT ALL ON SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-- Users
CREATE ROLE hdb_cfg_man WITH LOGIN PASSWORD 'hdbpp';
GRANT readwrite TO hdb_cfg_man;
CREATE ROLE hdb_event_sub WITH LOGIN PASSWORD 'hdbpp';
GRANT readwrite TO hdb_event_sub;
CREATE ROLE hdb_java_reporter WITH LOGIN PASSWORD 'hdbpp';
GRANT readonly TO hdb_java_reporter;
# Table of Contents
The documentation is purely about getting the shared library running on a correctly configured database. Setup of the TimescaleDb cluster and its stack is left to the user.
- [Table of Contents](#Table-of-Contents)
- [About](#About)
- [Building and Installation](#Building-and-Installation)
- [DB Schema](#DB-Schema)
- [Configuration](#Configuration)
## About
The overview is in the main project [README](../README.md).
## Building and Installation
* [Build](build.md) instructions.
* [Installation](install.md) guidelines.
## DB Schema
* [Schema](db-schema-config.md) guidelines and setup.
## Configuration
* [Configuration](configuration.md) parameter details.
# Build Instructions
To build the shared library please read the following.
## Dependencies
The project has two types of dependencies: those required by the toolchain, and those needed to do the actual build. Other dependencies are integrated directly into the project as submodules. The following third-party modules exist:
* libpqxx - Modern C++ Postgresql library (Submodule)
* spdlog - Logging system (Submodule)
* Catch2 - Unit test subsystem (Submodule)
* libhdbpp - Part of the hdb++ library loading chain (Modified version of [original](https://github.com/tango-controls-hdbpp/libhdbpp) project. This will be pushed back to the original repository in time)
### Toolchain Dependencies
To build the project, ensure the following dependencies are met:
* CMake 3.6 or higher
* C++14 compatible compiler (the code base uses C++14)
### Build Dependencies
Ensure the development version of the dependencies are installed. These are as follows:
* Tango Controls 9 or higher development headers and libraries
* omniORB release 4 or higher development headers and libraries
* libzmq3-dev or libzmq5-dev
* libpq-dev - Postgres C development library
## Building and Installation
To compile this library, first ensure it has been recursively cloned so all submodules are present in /thirdparty. The build system uses pkg-config to find some dependencies, for example Tango. If Tango is not installed in a standard location, set PKG_CONFIG_PATH, e.g.:
```bash
export PKG_CONFIG_PATH=/non/standard/tango/install/location
```
Then to build just the library:
```bash
mkdir -p build
cd build
cmake ..
make
```
The pkg-config path can also be set with the cmake argument CMAKE_PREFIX_PATH. This can be set on the command line at configuration time, e.g.:
```bash
...
cmake -DCMAKE_PREFIX_PATH=/non/standard/tango/install/location ..
...
```
## Build Flags
The following build flags are available:
### Standard CMake Flags
The following is a list of common useful CMake flags and their use:
| Flag | Setting | Description |
|------|-----|-----|
| CMAKE_INSTALL_PREFIX | PATH | Standard CMake flag to modify the install prefix. |
| CMAKE_INCLUDE_PATH | PATH[S] | Standard CMake flag to add include paths to the search path. |
| CMAKE_LIBRARY_PATH | PATH[S] | Standard CMake flag to add paths to the library search path |
| CMAKE_BUILD_TYPE | Debug/Release | Build type to produce |
### Project Flags
| Flag | Setting | Default | Description |
|------|-----|-----|-----|
| BUILD_UNIT_TESTS | ON/OFF | OFF | Build unit tests |
| BUILD_BENCHMARK_TESTS | ON/OFF | OFF | Build benchmark tests (Forces a Release build) |
| ENABLE_CLANG | ON/OFF | OFF | Clang code static analysis, readability, and cppcore guideline enforcement |
## Running Tests
### Unit Tests
The project has extensive unit tests to ensure it functions as expected. Build the project with testing enabled:
```bash
mkdir -p build
cd build
cmake -DBUILD_UNIT_TESTS=ON ..
make
```
To run all unit tests, a Postgresql database node with the project schema loaded is required. There is a default connection string inside test/TestHelpers.hpp:
```
user=postgres host=localhost port=5432 dbname=hdb password=password
```
If you run the hdb timescale docker image associated with this project locally then this will connect automatically. If you wish to use a different database, edit the string in test/TestHelpers.hpp.
To run all tests:
```bash
./test/unit-tests
```
To see the available tests and tags, should you wish to run a subset of the test suite (for example, if you do not have a Postgresql node to test against), the tests can be listed:
```bash
./bin/unit-tests --list-tests
```
Or:
```bash
./bin/unit-tests --list-tags
```
To see more options for the unit-test command line binary:
```bash
./bin/unit-tests --help
```
### Benchmark Tests
These are a work in progress to explore future optimisation points. If built, they can be run as follows:
```bash
mkdir -p build
cd build
cmake -DBUILD_BENCHMARK_TESTS=ON ..
make
```
```bash
./benchmark/benchmark-tests
```
# Configuration
## Library Configuration Parameters
The following configuration parameters are available. These are passed to the library via a vector of strings in the format key=value. Normally these values are configured and passed via the LibConfiguration property on the EventSubscriber or ConfigManager DeviceServer.
| Parameter | Mandatory | Default | Description |
|------|-----|-----|-----|
| libname | true | None | Must be "libhdb++timescale.so" |
| connect_string | true | None | Postgres connection string, eg user=postgres host=localhost port=5432 dbname=hdb password=password |
| logging_level | false | error | Logging level. See table below |
| log_file | false | false | Enable logging to file |
| log_console | false | false | Enable logging to the console |
| log_syslog | false | false | Enable logging to syslog |
| log_file_name | false | None | When logging to file, this is the path and name of the file to use. Ensure the path exists, otherwise this is an error condition. |
The logging_level parameter is case insensitive. Logging levels are as follows:
| Level | Description |
|------|-----|
| error | Log only error level events (recommended unless debugging) |
| warning | Log only warning level events |
| info | Log info level events |
| debug | Log debug level events. Good for early install debugging |
| trace | Trace level logging. Excessive level of debug, good for involved debugging |
| disabled | Disable logging subsystem |
## Configuration Example
Short example LibConfiguration property value on an EventSubscriber or ConfigManager. You will HAVE to change the various parts to match your system:
```
connect_string=user=hdb-user password=password host=hdb-database port=5432 dbname=hdb
logging_level=debug
log_file=true
log_syslog=false
log_console=false
libname=libhdb++timescale.so
log_file_name=/tmp/hdb/es-name.log
```
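A quick shell sketch (illustrative only, not part of the library) showing how such a key=value configuration might be assembled and checked for the two mandatory keys before being handed to a device server; the names and values are placeholders:

```bash
# Build a LibConfiguration-style value as one key=value entry per line
config="libname=libhdb++timescale.so
connect_string=user=postgres host=localhost port=5432 dbname=hdb password=password
logging_level=error"

# Fail early if a mandatory key is missing
for key in libname connect_string; do
    if ! printf '%s\n' "$config" | grep -q "^${key}="; then
        echo "missing mandatory key: ${key}" >&2
        exit 1
    fi
done
echo "config ok"
```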
# Database Schema Configuration
Schema setup and management is a very important aspect to running the HDB++ system with TimescaleDb. The following presents guidelines and a setup plan, but it is not exhaustive and additional information is welcome.
Some of the information assumes familiarity with TimescaleDb terms and technologies. Please refer to the TimescaleDb [documentation](www.timescaledb.com) for more information.
- [Database Schema Configuration](#Database-Schema-Configuration)
  - [Hyperchunk Sizes](#Hyperchunk-Sizes)
- [Schema Import](#Schema-Import)
- [Admin User](#Admin-User)
- [Table Creation](#Table-Creation)
- [Users](#Users)
- [Clean-up](#Clean-up)
- [Clustering](#Clustering)
## Hyperchunk Sizes
The [schema](../db-schema/schema.sql) file has default values set for all hypertable chunk sizes. It is assumed the initial deployment data load will be smaller than the final fully operational system, so chunk sizes are as follows:
- 28 days for all data tables, except:
- 14 days for att_scalar_devdouble, since this appears to be used more often than other tables.
These values can, and should, be adjusted to the deployment situation. Please see the TimescaleDb [documentation](www.timescaledb.com) for information on choosing chunk sizes.
Important: These are initial values, the expectation is the database will be monitored and values adjusted as it takes on its full load.
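Should a chunk size need changing after deployment, TimescaleDb provides the `set_chunk_time_interval` function; note it only affects chunks created after the call (the table and interval below are examples):

```sql
-- Applies to new chunks only; existing chunks keep their original interval
SELECT set_chunk_time_interval('att_scalar_devdouble', INTERVAL '14 days');
```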
## Schema Import
General setup steps.
### Admin User
Rather than create and manage the tables via a superuser, we create an admin user and have them create the tables:
```sql
CREATE ROLE hdb_admin WITH LOGIN PASSWORD 'hdbpp';
ALTER USER hdb_admin CREATEDB;
ALTER USER hdb_admin CREATEROLE;
ALTER USER hdb_admin SUPERUSER;
```
Note the SUPERUSER role will be stripped after the tables are set up.
### Table Creation
Now import schema.sql as the hdb_admin user, using psql:
```bash
psql -U hdb_admin -h HOST -p PORT -f schema.sql -d template1
```
Note: we use database template1 since hdb_admin currently has no database to connect to.
We should now have an hdb database owned by hdb_admin.
### Users
Next we need to set up the users (this may require some improvements, pull requests welcome). Connect as a superuser and create two roles, a readonly and a readwrite role:
```sql
-- Roles
CREATE ROLE readonly;
CREATE ROLE readwrite;
-- Permissions - readonly
GRANT CONNECT ON DATABASE hdb TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
-- Permissions - readwrite
GRANT CONNECT ON DATABASE hdb TO readwrite;
GRANT USAGE ON SCHEMA public TO readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO readwrite;
GRANT ALL ON SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO readwrite;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-- Users
CREATE ROLE hdb_cfg_man WITH LOGIN PASSWORD 'hdbpp';
GRANT readwrite TO hdb_cfg_man;
CREATE ROLE hdb_event_sub WITH LOGIN PASSWORD 'hdbpp';
GRANT readwrite TO hdb_event_sub;
CREATE ROLE hdb_java_reporter WITH LOGIN PASSWORD 'hdbpp';
GRANT readonly TO hdb_java_reporter;
```
Here we created three users that external applications will use to connect to the database. You may create as many as you want, in whatever roles you want.
## Clean-up
Finally, strip the SUPERUSER trait from hdb_admin:
```sql
ALTER USER hdb_admin NOSUPERUSER;
```
## Clustering
To get the levels of performance required to make the solution viable, we MUST cluster on the composite index of each data table. The file [cluster.sql](../db-schema/cluster.sql) contains the commands that must be run after the database has been set up.
Without this step, select performance will degrade on large tables.
As data is added, the tables will require the new data to be clustered on the index. You may choose how often and when to do this. Note that the process locks the tables. Options:
- Manually
- Cron job
TimescaleDb supports a more fine-grained clustering process. A tool is being developed to utilise this and run as a process that clusters on the index at regular intervals.
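The cron option above could be sketched as the following crontab entry; the schedule, connection details and file path are assumptions to adapt to your deployment and quiet periods:

```bash
# Re-cluster every Sunday at 03:00 (example schedule; the tables are locked while it runs)
0 3 * * 0 psql -U hdb_admin -h HOST -p PORT -d hdb -f /path/to/cluster.sql
```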
# Installation Instructions
All submodules are combined into the final library for ease of deployment. This means just the libhdbpp-timescale.so binary needs deploying to the target system.
## System Dependencies
The running system requires libpq5 installed to support calls to Postgresql. On Debian/Ubuntu this can be deployed as follows:
```bash
sudo apt-get install libpq5
```
## Installation
After the build has completed, simply run:
```
sudo make install
```
The shared library will be installed to /usr/local/lib on Debian/Ubuntu systems.