diff --git a/CHANGELOG.md b/CHANGELOG.md
index 09db218cc59830e81f4d1b6e1534389acfa64e2f..1c65a0cbd2fd8801410977da5eb19b0253901dca 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,18 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
 
 ## [Unreleased]
 
+### Changed
+
+- Moved some system documentation to hdbpp-timescale-project (the consolidated project).
+- Consolidated the remaining build/install instructions into the README.
+- Modified the build system to fetch libhdbpp and include it when requested. This is an aid to development.
+- Added support for the LIBHDBPP_PROJECT_BUILD flag, which is injected into the build from hdbpp-timescale-project.
+- Made compatible with the new libhdbpp (namespace and path changes).
+
+### Removed
+
+- Removed the embedded version of libhdbpp (the build can now fetch it when requested)
+
 ## [0.11.2] - 2020-01-23
 
 ### Fixed
diff --git a/README.md b/README.md
index 7e451472eec5c9c6071592cb19bdbb3f9fb4d63b..94b0c48701798c7e707503e6a57f38c88c5115f4 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,19 @@
   - [Bug Reports + Feature Requests](#Bug-Reports--Feature-Requests)
   - [Documentation](#Documentation)
   - [Building](#Building)
+    - [Dependencies](#Dependencies)
+      - [Toolchain Dependencies](#Toolchain-Dependencies)
+      - [Build Dependencies](#Build-Dependencies)
+    - [Building Process](#Building-Process)
+    - [Build Flags](#Build-Flags)
+      - [Standard CMake Flags](#Standard-CMake-Flags)
+      - [Project Flags](#Project-Flags)
+    - [Running Tests](#Running-Tests)
+      - [Unit Tests](#Unit-Tests)
+      - [Benchmark Tests](#Benchmark-Tests)
   - [Installing](#Installing)
+    - [System Dependencies](#System-Dependencies)
+    - [Installation](#Installation)
   - [License](#License)
 
 HDB++ backend library for the TimescaleDb extension to Postgresql. This library is loaded by libhdbpp to archive events from a Tango Controls system. Currently in a pre v1 release phase.
@@ -46,11 +58,164 @@ Please file the bug reports and feature requests in the issue tracker
 
 ## Building
 
-See [build.md](doc/build.md) in the doc folder
+To build the shared library, please read the following.
+
+### Dependencies
+
+The project has two types of dependencies: those required by the toolchain, and those needed to do the actual build. Other dependencies are integrated directly into the project as submodules. The following third-party modules exist (a sketch of fetching them follows the list):
+
+* libpqxx - Modern C++ Postgresql library (Submodule)
+* spdlog - Logging system (Submodule)
+* Catch2 - Unit test subsystem (Submodule)
+* libhdbpp - Part of the hdb++ library loading chain (a modified version of the [original](https://github.com/tango-controls-hdbpp/libhdbpp) project; the changes will be pushed back to the original repository in time)
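+
+If the repository was not cloned with `--recursive`, the submodules can still be pulled into an existing checkout; a minimal sketch using standard git commands (no project-specific assumptions):
+
+```bash
+# Fetch every submodule listed in .gitmodules into the thirdparty/ directory
+git submodule update --init --recursive
+```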
+
+#### Toolchain Dependencies
+
+If you wish to build the project, ensure the following dependencies are met:
+
+* CMake 3.6 or higher
+* C++14 compatible compiler (the code base uses C++14)
+
+#### Build Dependencies
+
+Ensure the development versions of the dependencies are installed. These are as follows:
+
+* Tango Controls 9 or higher development headers and libraries
+* omniORB release 4 or higher development headers and libraries
+* libzmq3-dev or libzmq5-dev
+* libpq-dev - Postgres C development library
+
+### Building Process
+
+To compile this library, first ensure it has been cloned recursively so all submodules are present in /thirdparty. The build system uses pkg-config to find some dependencies, for example Tango. If Tango is not installed to a standard location, set PKG_CONFIG_PATH, e.g.:
+
+```bash
+export PKG_CONFIG_PATH=/non/standard/tango/install/location
+```
+
+Then to build just the library:
+
+```bash
+mkdir -p build
+cd build
+cmake ..
+make
+```
+
+The pkg-config path can also be set with the cmake argument CMAKE_PREFIX_PATH. This can be set on the command line at configuration time, e.g.:
+
+```bash
+...
+cmake -DCMAKE_PREFIX_PATH=/non/standard/tango/install/location ..
+...
+```
+
+### Build Flags
+
+The following build flags are available:
+
+#### Standard CMake Flags
+
+The following is a list of commonly useful CMake flags and their use; an example invocation follows the table:
+
+| Flag | Setting | Description |
+|------|-----|-----|
+| CMAKE_INSTALL_PREFIX | PATH | Standard CMake flag to modify the install prefix. |
+| CMAKE_INCLUDE_PATH | PATH[S] | Standard CMake flag to add include paths to the search path. |
+| CMAKE_LIBRARY_PATH | PATH[S] | Standard CMake flag to add paths to the library search path |
+| CMAKE_BUILD_TYPE | Debug/Release | Build type to produce |
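+
+As an illustration, a debug build installed under a non-default prefix could be configured as follows (the paths are examples only):
+
+```bash
+cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/hdbpp ..
+```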
+
+#### Project Flags
+
+| Flag | Setting | Default | Description |
+|------|-----|-----|-----|
+| BUILD_UNIT_TESTS | ON/OFF | OFF | Build unit tests |
+| BUILD_BENCHMARK_TESTS | ON/OFF | OFF | Build benchmark tests (Forces a Release build) |
+| ENABLE_CLANG | ON/OFF | OFF | Enable Clang static analysis, readability, and cppcore guideline enforcement |
+| FETCH_LIBHDBPP | ON/OFF | OFF | Enable to have the build fetch and use a local version of libhdbpp |
+| FETCH_LIBHDBPP_TAG | | master | When FETCH_LIBHDBPP is enabled, this is the git tag to fetch |
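+
+For example, a configuration that builds the unit tests and fetches libhdbpp at a given tag might look like this (flag values are illustrative):
+
+```bash
+cmake -DBUILD_UNIT_TESTS=ON -DFETCH_LIBHDBPP=ON -DFETCH_LIBHDBPP_TAG=master ..
+```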
+
+### Running Tests
+
+#### Unit Tests
+
+The project has extensive unit tests to ensure it is functioning as expected. Build the project with testing enabled:
+
+```bash
+mkdir -p build
+cd build
+cmake -DBUILD_UNIT_TESTS=ON ..
+make
+```
+
+To run all unit tests, a postgresql database node with the project schema loaded is required. There is a default connection string inside test/TestHelpers.hpp:
+
+```
+user=postgres host=localhost port=5432 dbname=hdb password=password
+```
+
+If you run the hdb timescale docker image associated with this project locally, the tests will connect automatically. If you wish to use a different database, edit the string in test/TestHelpers.hpp.
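+
+To check that the database is reachable with those credentials before running the tests, the same connection string can be passed to psql (a quick sanity check, assuming the default string above):
+
+```bash
+psql "user=postgres host=localhost port=5432 dbname=hdb password=password" -c "SELECT 1;"
+```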
+
+To run all tests:
+
+```bash
+./test/unit-tests
+```
+
+Should you wish to run a subset of the test suite (for example, if you do not have a postgresql node to test against), the available tests and tags can be listed:
+
+```bash
+./bin/unit-tests --list-tests
+```
+
+Or:
+
+```bash
+./bin/unit-tests --list-tags
+```
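+
+Since the unit tests are built on Catch2, a tag expression can be passed as a positional argument to run only that subset (the tag shown is a placeholder; use one reported by --list-tags):
+
+```bash
+./bin/unit-tests "[some-tag]"
+```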
+
+To see more options for the unit-test command line binary:
+
+```bash
+./bin/unit-tests --help
+```
+
+#### Benchmark Tests
+
+These are a work in progress to explore future optimisation points. If built, they can be run as follows:
+
+```bash
+mkdir -p build
+cd build
+cmake -DBUILD_BENCHMARK_TESTS=ON ..
+make
+```
+
+```bash
+./benchmark/benchmark-tests
+```
 
 ## Installing
 
-See [install.md](doc/install.md) in the doc folder
+All submodules are combined into the final library for ease of deployment. This means only the libhdbpp-timescale.so binary needs to be deployed to the target system.
+
+### System Dependencies
+
+The running system requires libpq5 installed to support the calls to Postgresql. On Debian/Ubuntu this can be installed as follows:
+
+```bash
+sudo apt-get install libpq5
+```
+
+### Installation
+
+After the build has completed, simply run:
+
+```bash
+sudo make install
+```
+
+The shared library will be installed to /usr/local/lib on Debian/Ubuntu systems.
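+
+If a different location is required, the install prefix can be changed at configure time with the CMAKE_INSTALL_PREFIX flag listed above; a sketch (the prefix is an example):
+
+```bash
+cd build
+cmake -DCMAKE_INSTALL_PREFIX=/usr ..
+make
+sudo make install
+```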
 
 ## License
 
diff --git a/db-schema/cluster.sql b/db-schema/cluster.sql
deleted file mode 100644
index baee7e7e396ee4cb344b0abf44f96e6218cbf59a..0000000000000000000000000000000000000000
--- a/db-schema/cluster.sql
+++ /dev/null
@@ -1,57 +0,0 @@
-ALTER TABLE att_scalar_devboolean CLUSTER ON att_scalar_devboolean_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devboolean CLUSTER ON att_array_devboolean_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devuchar CLUSTER ON att_scalar_devuchar_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devuchar CLUSTER ON att_array_devuchar_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devshort CLUSTER ON att_scalar_devshort_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devshort CLUSTER ON att_array_devshort_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devushort CLUSTER ON att_scalar_devushort_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devushort CLUSTER ON att_array_devushort_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devlong CLUSTER ON att_scalar_devlong_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devlong CLUSTER ON att_array_devlong_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devulong CLUSTER ON att_scalar_devulong_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devulong CLUSTER ON att_array_devulong_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devlong64 CLUSTER ON att_scalar_devlong64_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devlong64 CLUSTER ON att_array_devlong64_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devulong64 CLUSTER ON att_scalar_devulong64_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devulong64 CLUSTER ON att_array_devulong64_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devfloat CLUSTER ON att_scalar_devfloat_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devfloat CLUSTER ON att_array_devfloat_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devdouble CLUSTER ON att_scalar_devdouble_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devdouble CLUSTER ON att_array_devdouble_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devstring CLUSTER ON att_scalar_devstring_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devstring CLUSTER ON att_array_devstring_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devstate CLUSTER ON att_scalar_devstate_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devstate CLUSTER ON att_array_devstate_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devencoded CLUSTER ON att_scalar_devencoded_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devencoded CLUSTER ON att_array_devencoded_att_conf_id_data_time_idx;
-ALTER TABLE att_scalar_devenum CLUSTER ON att_scalar_devenum_att_conf_id_data_time_idx;
-ALTER TABLE att_array_devenum CLUSTER ON att_array_devenum_att_conf_id_data_time_idx;
-
-CLUSTER att_scalar_devboolean;
-CLUSTER att_array_devboolean;
-CLUSTER att_scalar_devuchar;
-CLUSTER att_array_devuchar;
-CLUSTER att_scalar_devshort;
-CLUSTER att_array_devshort;
-CLUSTER att_scalar_devushort;
-CLUSTER att_array_devushort;
-CLUSTER att_scalar_devlong;
-CLUSTER att_array_devlong;
-CLUSTER att_scalar_devulong;
-CLUSTER att_array_devulong;
-CLUSTER att_scalar_devlong64;
-CLUSTER att_array_devlong64;
-CLUSTER att_scalar_devulong64;
-CLUSTER att_array_devulong64;
-CLUSTER att_scalar_devfloat;
-CLUSTER att_array_devfloat;
-CLUSTER att_scalar_devdouble;
-CLUSTER att_array_devdouble;
-CLUSTER att_scalar_devstring;
-CLUSTER att_array_devstring;
-CLUSTER att_scalar_devstate;
-CLUSTER att_array_devstate;
-CLUSTER att_scalar_devencoded;
-CLUSTER att_array_devencoded;
-CLUSTER att_scalar_devenum;
-CLUSTER att_array_devenum;
\ No newline at end of file
diff --git a/db-schema/schema.sql b/db-schema/schema.sql
deleted file mode 100755
index 8cab5645dc6e456b38f1f7065490b12efd7fef59..0000000000000000000000000000000000000000
--- a/db-schema/schema.sql
+++ /dev/null
@@ -1,666 +0,0 @@
-DROP DATABASE IF EXISTS hdb;
-
--- Create the hdb database and use it
-CREATE DATABASE hdb;
-\c hdb
-
--- Add the timescaledb extension (Important)
-CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
-
--------------------------------------------------------------------------------
-CREATE DOMAIN uchar AS numeric(3) -- ALT smallint
-    CHECK(VALUE >= 0 AND VALUE <= 255);
-
-CREATE DOMAIN ushort AS numeric(5)  -- ALT integer
-    CHECK(VALUE >= 0 AND VALUE <= 65535);
-
-CREATE DOMAIN ulong AS numeric(10) -- ALT bigint
-    CHECK(VALUE >= 0 AND VALUE <= 4294967295);
-
-CREATE DOMAIN ulong64 AS numeric(20)
-    CHECK(VALUE >= 0 AND VALUE <= 18446744073709551615);
-
--------------------------------------------------------------------------------
-DROP TABLE IF EXISTS att_conf_type;
-
--- Mappings for ths Tango Data Type (used in att_conf)
-CREATE TABLE att_conf_type (
-    att_conf_type_id serial NOT NULL,
-    type text NOT NULL,
-    type_num smallint NOT NULL,
-    PRIMARY KEY (att_conf_type_id)
-);
-
-COMMENT ON TABLE att_conf_type is 'Attribute data type';
-
-INSERT INTO att_conf_type (type, type_num) VALUES
-('DEV_BOOLEAN', 1),('DEV_SHORT', 2),('DEV_LONG', 3),('DEV_FLOAT', 4),
-('DEV_DOUBLE', 5),('DEV_USHORT', 6),('DEV_ULONG', 7),('DEV_STRING', 8),
-('DEV_STATE', 19),('DEV_UCHAR',22),('DEV_LONG64', 23),('DEV_ULONG64', 24),
-('DEV_ENCODED', 28),('DEV_ENUM', 29);
-
-DROP TABLE IF EXISTS att_conf_format;
-
--- Mappings for ths Tango Data Format Type (used in att_conf)
-CREATE TABLE att_conf_format (
-    att_conf_format_id serial NOT NULL,
-    format text NOT NULL,
-    format_num smallint NOT NULL,
-    PRIMARY KEY (att_conf_format_id)
-);
-
-COMMENT ON TABLE att_conf_format is 'Attribute format type';
-
-INSERT INTO att_conf_format (format, format_num) VALUES
-('SCALAR', 0),('SPECTRUM', 1),('IMAGE', 2);
-
-DROP TABLE IF EXISTS att_conf_write;
-
--- Mappings for the Tango Data Write Type (used in att_conf)
-CREATE TABLE att_conf_write (
-    att_conf_write_id serial NOT NULL,
-    write text NOT NULL,
-    write_num smallint NOT NULL,
-    PRIMARY KEY (att_conf_write_id)
-);
-
-COMMENT ON TABLE att_conf_write is 'Attribute write type';
-
-INSERT INTO att_conf_write (write, write_num) VALUES
-('READ', 0),('READ_WITH_WRITE', 1),('WRITE', 2),('READ_WRITE', 3);
-
--- The att_conf table contains the primary key for all data tables, the
--- att_conf_id. Expanded on the normal hdb++ tables since we add information
--- about the type.
-CREATE TABLE IF NOT EXISTS att_conf (
-    att_conf_id serial NOT NULL,
-    att_name text NOT NULL,
-    att_conf_type_id smallint NOT NULL,
-    att_conf_format_id smallint NOT NULL,
-    att_conf_write_id smallint NOT NULL,
-    table_name text NOT NULL,
-    cs_name text NOT NULL DEFAULT '',
-    domain text NOT NULL DEFAULT '',
-    family text NOT NULL DEFAULT '',
-    member text NOT NULL DEFAULT '',
-    name text NOT NULL DEFAULT '',
-    ttl int,
-    hide boolean DEFAULT false,
-    PRIMARY KEY (att_conf_id),
-    FOREIGN KEY (att_conf_type_id) REFERENCES att_conf_type (att_conf_type_id),
-    FOREIGN KEY (att_conf_format_id) REFERENCES att_conf_format (att_conf_format_id),
-    FOREIGN KEY (att_conf_write_id) REFERENCES att_conf_write (att_conf_write_id),
-    UNIQUE (att_name)
-);
-
-COMMENT ON TABLE att_conf is 'Attribute Configuration Table';
-CREATE INDEX IF NOT EXISTS att_conf_att_conf_id_idx ON att_conf (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_conf_att_conf_type_id_idx ON att_conf (att_conf_type_id);
-
--------------------------------------------------------------------------------
-DROP TABLE IF EXISTS att_history_event;
-
-CREATE TABLE att_history_event (
-    att_history_event_id serial NOT NULL,
-    event text NOT NULL,
-    PRIMARY KEY (att_history_event_id)
-);
-
-COMMENT ON TABLE att_history_event IS 'Attribute history events description';
-CREATE INDEX IF NOT EXISTS att_history_att_history_event_id_idx ON att_history_event (att_history_event_id);
-
-CREATE TABLE IF NOT EXISTS att_history (
-    att_conf_id integer NOT NULL,
-    att_history_event_id integer NOT NULL,
-    event_time timestamp WITH TIME ZONE,
-    details json,
-    PRIMARY KEY (att_conf_id, event_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_history_event_id) REFERENCES att_history_event (att_history_event_id)
-);
-
-COMMENT ON TABLE att_history is 'Attribute Configuration Events History Table';
-CREATE INDEX IF NOT EXISTS att_history_att_conf_id_inx ON att_history (att_conf_id);
-
--------------------------------------------------------------------------------
-CREATE TABLE IF NOT EXISTS att_parameter (
-    att_conf_id integer NOT NULL,
-    recv_time timestamp WITH TIME ZONE NOT NULL,
-    label text NOT NULL DEFAULT '',
-    unit text NOT NULL DEFAULT '',
-    standard_unit text NOT NULL DEFAULT '',
-    display_unit text NOT NULL DEFAULT '',
-    format text NOT NULL DEFAULT '',
-    archive_rel_change text NOT NULL DEFAULT '',
-    archive_abs_change text NOT NULL DEFAULT '',
-    archive_period text NOT NULL DEFAULT '',
-    description text NOT NULL DEFAULT '',
-    details json,
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id)
-);
-
-COMMENT ON TABLE att_parameter IS 'Attribute configuration parameters';
-CREATE INDEX IF NOT EXISTS att_parameter_recv_time_idx ON att_parameter (recv_time);
-CREATE INDEX IF NOT EXISTS att_parameter_att_conf_id_idx ON  att_parameter (att_conf_id);
-SELECT create_hypertable('att_parameter', 'recv_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
--------------------------------------------------------------------------------
-CREATE TABLE IF NOT EXISTS att_error_desc (
-    att_error_desc_id serial NOT NULL,
-    error_desc text NOT NULL,
-    PRIMARY KEY (att_error_desc_id),
-    UNIQUE (error_desc)
-);
-
-COMMENT ON TABLE att_error_desc IS 'Error Description Table';
-CREATE INDEX IF NOT EXISTS att_error_desc_att_error_desc_id_idx ON att_error_desc (att_error_desc_id);
-
--------------------------------------------------------------------------------
-CREATE TABLE IF NOT EXISTS att_scalar_devboolean (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r boolean,
-    value_w boolean,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devboolean IS 'Scalar Boolean Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devboolean_att_conf_id_idx ON att_scalar_devboolean (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devboolean_att_conf_id_data_time_idx ON att_scalar_devboolean (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devboolean', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devboolean (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r boolean[],
-    value_w boolean[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devboolean IS 'Array Boolean Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devboolean_att_conf_id_idx ON att_array_devboolean (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devboolean_att_conf_id_data_time_idx ON att_array_devboolean (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devboolean', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devuchar (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r uchar,
-    value_w uchar,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devuchar IS 'Scalar UChar Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devuchar_att_conf_id_idx ON att_scalar_devuchar (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devuchar_att_conf_id_data_time_idx ON att_scalar_devuchar (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devuchar', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devuchar (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r uchar[],
-    value_w uchar[],
-    quality smallint,
-    details json,
-    att_error_desc_id integer,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devuchar IS 'Array UChar Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devuchar_att_conf_id_idx ON att_array_devuchar (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devuchar_att_conf_id_data_time_idx ON att_array_devuchar (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devuchar', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devshort (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r smallint,
-    value_w smallint,
-    quality smallint,
-    details json,
-    att_error_desc_id integer,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devshort IS 'Scalar Short Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devshort_att_conf_id_idx ON att_scalar_devshort (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devshort_att_conf_id_data_time_idx ON att_scalar_devshort (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devshort', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devshort (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r smallint[],
-    value_w smallint[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devshort IS 'Array Short Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devshort_att_conf_id_idx ON att_array_devshort (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devshort_att_conf_id_data_time_idx ON att_array_devshort (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devshort', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devushort (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ushort,
-    value_w ushort,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devushort IS 'Scalar UShort Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devushort_att_conf_id_idx ON att_scalar_devushort (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devushort_att_conf_id_data_time_idx ON att_scalar_devushort (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devushort', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devushort (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ushort[],
-    value_w ushort[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devushort IS 'Array UShort Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devushort_att_conf_id_idx ON att_array_devushort (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devushort_att_conf_id_data_time_idx ON att_array_devushort (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devushort', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devlong (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r integer,
-    value_w integer,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devlong IS 'Scalar Long Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devlong_att_conf_id_idx ON att_scalar_devlong (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devlong_att_conf_id_data_time_idx ON att_scalar_devlong (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devlong', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devlong (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r integer[],
-    value_w integer[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devlong IS 'Array Long Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devlong_att_conf_id_idx ON att_array_devlong (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devlong_att_conf_id_data_time_idx ON att_array_devlong (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devlong', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devulong (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ulong,
-    value_w ulong,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devulong IS 'Scalar ULong Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devulong_att_conf_id_idx ON att_scalar_devulong (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devulong_att_conf_id_data_time_idx ON att_scalar_devulong (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devulong', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devulong (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ulong[],
-    value_w ulong[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devulong IS 'Array ULong Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devulong_att_conf_id_idx ON att_array_devulong (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devulong_att_conf_id_data_time_idx ON att_array_devulong (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devulong', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devlong64 (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r bigint,
-    value_w bigint,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devlong64 IS 'Scalar Long64 Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devlong64_att_conf_id_idx ON att_scalar_devlong64 (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devlong64_att_conf_id_data_time_idx ON att_scalar_devlong64 (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devlong64', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devlong64 (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r bigint[],
-    value_w bigint[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devlong64 IS 'Array Long64 Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devlong64_att_conf_id_idx ON att_array_devlong64 (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devlong64_att_conf_id_data_time_idx ON att_array_devlong64 (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devlong64', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devulong64 (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ulong64,
-    value_w ulong64,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devulong64 IS 'Scalar ULong64 Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devulong64_att_conf_id_idx ON att_scalar_devulong64 (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devulong64_att_conf_id_data_time_idx ON att_scalar_devulong64 (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devulong64', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devulong64 (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r ulong64[],
-    value_w ulong64[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devulong64 IS 'Array ULong64 Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devulong64_att_conf_id_idx ON att_array_devulong64 (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devulong64_att_conf_id_data_time_idx ON att_array_devulong64 (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devulong64', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devfloat (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r real,
-    value_w real,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devfloat IS 'Scalar Float Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devfloat_att_conf_id_idx ON att_scalar_devfloat (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devfloat_att_conf_id_data_time_idx ON att_scalar_devfloat (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devfloat', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devfloat (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r real[],
-    value_w real[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devfloat IS 'Array Float Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devfloat_att_conf_id_idx ON att_array_devfloat (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devfloat_att_conf_id_data_time_idx ON att_array_devfloat (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devfloat', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devdouble (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r double precision,
-    value_w double precision,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devdouble IS 'Scalar Double Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devdouble_att_conf_id_idx ON att_scalar_devdouble (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devdouble_att_conf_id_data_time_idx ON att_scalar_devdouble (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devdouble', 'data_time', chunk_time_interval => interval '14 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devdouble (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r double precision[],
-    value_w double precision[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devdouble IS 'Array Double Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devdouble_att_conf_id_idx ON att_array_devdouble (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devdouble_att_conf_id_data_time_idx ON att_array_devdouble (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devdouble', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devstring (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r text,
-    value_w text,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devstring IS 'Scalar String Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devstring_att_conf_id_idx ON att_scalar_devstring (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devstring_att_conf_id_data_time_idx ON att_scalar_devstring (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devstring', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devstring (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r text[],
-    value_w text[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devstring IS 'Array String Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devstring_att_conf_id_idx ON att_array_devstring (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devstring_att_conf_id_data_time_idx ON att_array_devstring (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devstring', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devstate (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r integer,
-    value_w integer,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devstate IS 'Scalar State Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devstate_att_conf_id_idx ON att_scalar_devstate (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devstate_att_conf_id_data_time_idx ON att_scalar_devstate (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devstate', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devstate (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r integer[],
-    value_w integer[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devstate IS 'Array State Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devstate_att_conf_id_idx ON att_array_devstate (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devstate_att_conf_id_data_time_idx ON att_array_devstate (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devstate', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_scalar_devencoded (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r bytea,
-    value_w bytea,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-COMMENT ON TABLE att_scalar_devencoded IS 'Scalar DevEncoded Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devencoded_att_conf_id_idx ON att_scalar_devencoded (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devencoded_att_conf_id_data_time_idx ON att_scalar_devencoded (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devencoded', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devencoded (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r bytea[],
-    value_w bytea[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-COMMENT ON TABLE att_array_devencoded IS 'Array DevEncoded Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devencoded_att_conf_id_idx ON att_array_devencoded (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devencoded_att_conf_id_data_time_idx ON att_array_devencoded (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devencoded', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
--- The Enum tables are unique in that they store a value and text label for 
--- each data point
-CREATE TABLE IF NOT EXISTS att_scalar_devenum (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r_label text,
-    value_r smallint,
-    value_w_label text,
-    value_w smallint,
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_scalar_devenum IS 'Scalar Enum Values Table';
-CREATE INDEX IF NOT EXISTS att_scalar_devenum_att_conf_id_idx ON att_scalar_devenum (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_scalar_devenum_att_conf_id_data_time_idx ON att_scalar_devenum (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_scalar_devenum', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
-CREATE TABLE IF NOT EXISTS att_array_devenum (
-    att_conf_id integer NOT NULL,
-    data_time timestamp WITH TIME ZONE NOT NULL,
-    value_r_label text[],
-    value_r smallint[],
-    value_w_label text[],
-    value_w smallint[],
-    quality smallint,
-    att_error_desc_id integer,
-    details json,
-    PRIMARY KEY (att_conf_id, data_time),
-    FOREIGN KEY (att_conf_id) REFERENCES att_conf (att_conf_id),
-    FOREIGN KEY (att_error_desc_id) REFERENCES att_error_desc (att_error_desc_id)
-);
-
-COMMENT ON TABLE att_array_devenum IS 'Array Enum Values Table';
-CREATE INDEX IF NOT EXISTS att_array_devenum_att_conf_id_idx ON att_array_devenum (att_conf_id);
-CREATE INDEX IF NOT EXISTS att_array_devenum_att_conf_id_data_time_idx ON att_array_devenum (att_conf_id,data_time DESC);
-SELECT create_hypertable('att_array_devenum', 'data_time', chunk_time_interval => interval '28 day', create_default_indexes => FALSE);
-
diff --git a/db-schema/users.sql b/db-schema/users.sql
deleted file mode 100644
index 2949ef8bc3cbd8245ca2584aca10b84ff4122b9e..0000000000000000000000000000000000000000
--- a/db-schema/users.sql
+++ /dev/null
@@ -1,30 +0,0 @@
--- Roles
-CREATE ROLE readonly;
-CREATE ROLE readwrite;
-
--- Permissions - readonly
-GRANT CONNECT ON DATABASE hdb TO readonly;
-GRANT USAGE ON SCHEMA public TO readonly;
-GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
-
--- Permissions - readwrite
-GRANT CONNECT ON DATABASE hdb TO readwrite;
-GRANT USAGE ON SCHEMA public TO readwrite;
-GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
-GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO readwrite;
-GRANT ALL ON SCHEMA public TO readwrite;
-GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO readwrite;
-GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-
--- Users
-CREATE ROLE hdb_cfg_man WITH LOGIN PASSWORD 'hdbpp';
-GRANT readwrite TO hdb_cfg_man;
-
-CREATE ROLE hdb_event_sub WITH LOGIN PASSWORD 'hdbpp';
-GRANT readwrite TO hdb_event_sub;
-
-CREATE ROLE hdb_java_reporter WITH LOGIN PASSWORD 'hdbpp';
-GRANT readonly TO hdb_java_reporter;
\ No newline at end of file
diff --git a/doc/README.md b/doc/README.md
deleted file mode 100644
index 299a5329bd77834854fc35260d1916fcd6a9c363..0000000000000000000000000000000000000000
--- a/doc/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Table of Contents
-
-The documentation is purely about getting the shared library running on a correctly configured database. Setup of the TimescaleDb cluster and its stack is left to the user.
-
-- [Table of Contents](#Table-of-Contents)
-  - [About](#About)
-  - [Building and Installation](#Building-and-Installation)
-  - [DB Schema](#DB-Schema)
-  - [Configuration](#Configuration)
-
-## About
-
-The overview is in the main project [README](../README.md).
-
-## Building and Installation
-
-* [Build](build.md) instructions.
-* [Installation](install.md) guidelines.
-
-## DB Schema
-
-* [Schema](db-schema-config.md) guidelines and setup.
-
-## Configuration
-
-* [Configuration](configuration.md) parameter details.
\ No newline at end of file
diff --git a/doc/build.md b/doc/build.md
deleted file mode 100644
index 3f6496096e3f2c56538eeb00d623189177452306..0000000000000000000000000000000000000000
--- a/doc/build.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Build Instructions
-
-To build the shared library please read the following.
-
-## Dependencies
-
-The project has two types of dependencies, those required by the toolchain, and those to do the actual build. Other dependencies are integrated directly into the project as submodules. The following thirdparty modules exists:
-
-* libpqxx - Modern C++ Postgresql library (Submodule)
-* spdlog - Logging system (Submodule)
-* Catch2 - Unit test subsystem (Submodule)
-* libhdbpp - Part of the hdb++ library loading chain (Modified version of [original](https://github.com/tango-controls-hdbpp/libhdbpp) project. This will be pushed back to the original repository in time)
-
-### Toolchain Dependencies
-
-If wishing to build the project, ensure the following dependencies are met:
-
-* CMake 3.6 or higher
-* C++14 compatible compiler (code base is using c++14)
-
-### Build Dependencies
-
-Ensure the development version of the dependencies are installed. These are as follows:
-
-* Tango Controls 9 or higher development headers and libraries
-* omniORB release 4 or higher development headers and libraries
-* libzmq3-dev or libzmq5-dev
-* libpq-dev - Postgres C development library
-
-## Building and Installation
-
-To compile this library, first ensure it has been recursively cloned so all submodules are present in /thirdparty. The build system uses pkg-config to find some dependencies, for example Tango. If Tango is not installed to a standard location, set PKG_CONFIG_PATH, i.e.
-
-```bash
-export PKG_CONFIG_PATH=/non/standard/tango/install/location
-```
-
-Then to build just the library:
-
-```bash
-mkdir -p build
-cd build
-cmake ..
-make
-```
-
-The pkg-config path can also be set with the cmake argument CMAKE_PREFIX_PATH. This can be set on the command line at configuration time, i.e.:
-
-```bash
-...
-cmake -DCMAKE_PREFIX_PATH=/non/standard/tango/install/location ..
-...
-```
-
-## Build Flags
-
-The following build flags are available
-
-### Standard CMake Flags
-
-The following is a list of common useful CMake flags and their use:
-
-| Flag | Setting | Description |
-|------|-----|-----|
-| CMAKE_INSTALL_PREFIX | PATH | Standard CMake flag to modify the install prefix. |
-| CMAKE_INCLUDE_PATH | PATH[S] | Standard CMake flag to add include paths to the search path. |
-| CMAKE_LIBRARY_PATH | PATH[S] | Standard CMake flag to add paths to the library search path |
-| CMAKE_BUILD_TYPE | Debug/Release | Build type to produce |
-
-### Project Flags
-
-| Flag | Setting | Default | Description |
-|------|-----|-----|-----|
-| BUILD_UNIT_TESTS | ON/OFF | OFF | Build unit tests |
-| BUILD_BENCHMARK_TESTS | ON/OFF | OFF | Build benchmark tests (Forces a Release build) |
-| ENABLE_CLANG | ON/OFF | OFF | Clang code static analysis, readability, and cppcore guideline enforcement |
-
-## Running Tests
-
-### Unit Tests
-
-The project has extensive unit tests to ensure its functioning as expect. Build the project with testing enabled:
-
-```bash
-mkdir -p build
-cd build
-cmake -DBUILD_UNIT_TESTS=ON ..
-make
-```
-
-To run all unit tests, a postgresql database node is required with the project schema loaded up. There is a default connection string inside test/TestHelpers.hpp:
-
-```
-user=postgres host=localhost port=5432 dbname=hdb password=password
-```
-
-If you run the hdb timescale docker image associated with this project locally then this will connect automatically. If you wish to use a different database, edit the string in test/TestHelpers.hpp.
-
-To run all tests:
-
-```bash
-./test/unit-tests
-```
-
-To look at the available tests and tags, should you wish to run a subset of the test suite (for example, you do not have a postgresql node to test against), then tests and be listed:
-
-```bash
-./bin/unit-tests --list-tests
-```
-
-Or:
-
-```bash
-./bin/unit-tests --list-tags
-```
-
-To see more options for the unit-test command line binary:
-
-```bash
-./bin/unit-tests --help
-```
-
-### Benchmark Tests
-
-These are a work in progress to explore future optimisation point. If built, they can be run as follows:
-
-```bash
-mkdir -p build
-cd build
-cmake -DBUILD_BENCHMARK_TESTS=ON ..
-make
-```
-
-```bash
-./benchmark/benchmark-tests
-```
\ No newline at end of file
diff --git a/doc/configuration.md b/doc/configuration.md
deleted file mode 100644
index e23e89a513218ae3e042d2cf91e3163100ffb23f..0000000000000000000000000000000000000000
--- a/doc/configuration.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Configuration
-
-## Library Configuration Parameters
-
-The following configuration parameters are available. These are passed to the library via a vector of strings, the format is key=value. Normally these values are configured and passed via the LibConfiguration parameters on the EventSubscriber or ConfigManager DeviceServer.
-
-
-| Parameter | Mandatory | Default | Description |
-|------|-----|-----|-----|
-| libname | true | None | Must be "libhdb++timescale.so" |
-| connect_string | true | None | Postgres connection string, eg user=postgres host=localhost port=5432 dbname=hdb password=password |
-| logging_level | false | error | Logging level. See table below |
-| log_file | false | false | Enable logging to file |
-| log_console | false | false | Enable logging to the console |
-| log_syslog | false | false | Enable logging to syslog |
-| log_file_name | false | None | When logging to file, this is the path and name of file to use. Ensure the path exists otherwise this is an error conditions. |
-
-The logging_level parameter is case insensitive. Logging levels are as follows:
-
-| Level | Description |
-|------|-----|
-| error | Log only error level events (recommended unless debugging) |
-| warning | Log only warning level events |
-| info | Log only warning level events |
-| debug | Log only warning level events. Good for early install debugging |
-| trace | Trace level logging. Excessive level of debug, good for involved debugging |
-| disabled | Disable logging subsystem |
-
-## Configuration Example
-
-Short example LibConfiguration property value on an EventSubscriber or ConfigManager. You will HAVE to change the various parts to match your system:
-
-```
-connect_string=user=hdb-user password=password host=hdb-database port=5432 dbname=hdb
-logging_level=debug
-log_file=true
-log_syslog=false
-log_console=false
-libname=libhdb++timescale.so
-log_file_name=/tmp/hdb/es-name.log
-````
\ No newline at end of file
diff --git a/doc/db-schema-config.md b/doc/db-schema-config.md
deleted file mode 100644
index 8c0e6ea16fb90fa02b8894ae24dc1ea3aa8314c6..0000000000000000000000000000000000000000
--- a/doc/db-schema-config.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Database Schema Configuration
-
-Schema setup and management is a very important aspect to running the HDB++ system with TimescaleDb. The following presents guidelines and a setup plan, but it is not exhaustive and additional information is welcome.
-
-Some of the information assumes familiarity with TimescaleDb terms and technologies. Please to TimescaleDb [documentation](www.timescaledb.com) for more information.
-
-- [Database Schema Configuration](#Database-Schema-Configuration)
-  - [Hypperchunk Sizes](#Hypperchunk-Sizes)
-  - [Schema Import](#Schema-Import)
-    - [Admin User](#Admin-User)
-    - [Table Creation](#Table-Creation)
-    - [Users](#Users)
-  - [Clean-up](#Clean-up)
-  - [Clustering](#Clustering)
-
-## Hypperchunk Sizes
-
-The [schema](../db-schema/schema.sql) file has default values set for all hyper table chunk sizes. It is assumed initial deployment data load will be smaller than the final fully operational system, so chunk sizes are as follows:
-
-- 28 days for all data tables, except:
-- 14 days for att_scalar_devdouble, since this appears to be used more often than other tables.
-
-These values can, and should be, adjusted to the deployment situation. Please see the TimescaleDb [documentation](www.timescaledb.com) for information on choosing chunk sizes.
-
-Important: These are initial values, the expectation is the database will be monitored and values adjusted as it takes on its full load.
-
-## Schema Import
-
-General setup steps.
-
-### Admin User
-
-Rather than create and manage the tables via a superuser, we create and admin user and have them create the tables:
-
-```sql
-CREATE ROLE hdb_admin WITH LOGIN PASSWORD 'hdbpp';
-ALTER USER hdb_admin CREATEDB;
-ALTER USER hdb_admin CREATEROLE;
-ALTER USER hdb_admin SUPERUSER;
-```
-
-Note the SUPERUSER role will be stripped after the tables are set up.
-
-### Table Creation
-
-Now import the schema.sql as the hdb_admin user. From pqsl:
-
-```bash
-psql -U hdb_admin -h HOST -p PORT-f schema.sql  -d template1
-```
-
-Note: we use database template1 since hdb_admin currently has no database to connect to.
-
-We should now have a hdb database owned by hdb_admin.
-
-### Users
-
-Next we need to set up the users (this may require some improvements, pull requests welcome). Connect as a superuser and create two roles, a readonly and a readwrite role:
-
-```sql
--- Roles
-CREATE ROLE readonly;
-CREATE ROLE readwrite;
-
--- Permissions - readonly
-GRANT CONNECT ON DATABASE hdb TO readonly;
-GRANT USAGE ON SCHEMA public TO readonly;
-GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly;
-
--- Permissions - readwrite
-GRANT CONNECT ON DATABASE hdb TO readwrite;
-GRANT USAGE ON SCHEMA public TO readwrite;
-GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
-GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO readwrite;
-GRANT ALL ON SCHEMA public TO readwrite;
-GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO readwrite;
-GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO readwrite;
-
--- Users
-CREATE ROLE hdb_cfg_man WITH LOGIN PASSWORD 'hdbpp';
-GRANT readwrite TO hdb_cfg_man;
-
-CREATE ROLE hdb_event_sub WITH LOGIN PASSWORD 'hdbpp';
-GRANT readwrite TO hdb_event_sub;
-
-CREATE ROLE hdb_java_reporter WITH LOGIN PASSWORD 'hdbpp';
-GRANT readonly TO hdb_java_reporter;
-```
-
-Here we created three users that external applications will use to connect to the database. You may create as many and in what ever role you want.
-
-## Clean-up
-
-Finally, strip the SUPERUSER trait from hdb_admin:
-
-```sql
-ALTER USER hdb_admin NOSUPERUSER;
-```
-
-## Clustering
-
-To get the levels of performance required to make the solution viable we MUST cluster on the composite index of each data table. the file [cluster.sql](../db-schema/cluster.sql) contains the commands that must be run after the database has been setup.
-
-Without this step, select performance will degrade on large tables.
-
-As data is added, the tables will require the new data to be clustered on the index. You may choose the period and time when to do this. The process does lock the tables. Options:
-
-- Manually
-- Cron job
-
-TimescaleDb supports a more fine grained cluster process. A tool is being developed to utilities this and run as a process to cluster on the index at regular intervals.
diff --git a/doc/install.md b/doc/install.md
deleted file mode 100644
index be187200a9d7a8fa8070d34a7324a3957bae871f..0000000000000000000000000000000000000000
--- a/doc/install.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Installation Instructions
-
-All submodules are combined into the final library for ease of deployment. This means just the libhdbpp-timescale.so binary needs deploying to the target system.
-
-## System Dependencies
-
-The running system requires libpq5 installed to support the calls Postgresql. On Debian/Ubuntu this can be deployed as follows:
-
-```bash
-sudo apt-get install libpq5
-```
-
-## Installation
-
-After the build has completed, simply run:
-
-```
-sudo make install
-```
-
-The shared library will be installed to /usr/local/lib on Debian/Ubuntu systems.
\ No newline at end of file