Commit bd322f0b authored by David Rafferty

Merge branch 'master' into RAP-199
.. _help:

Getting help
============
**prefactor** is a continuously maintained and developed software package.
If you need help or want to report any issues or bugs, follow these links:

- `Prefactor GitHub issues`_
- Frequently Asked Questions (`FAQ`_)

.. _Prefactor GitHub issues: https://github.com/lofar-astron/prefactor/issues
.. _FAQ: https://github.com/lofar-astron/prefactor/wiki/Documentation%3A-Faq
.. _image_pipeline:

Image pipeline
==============
This pipeline produces a Stokes-I image of the full field of view of the target data, using the full bandwidth (a Stokes-V image is also produced for quality-control purposes). The
parset is named ``Pre-Facet-Image.parset``.
Prepare target
--------------
The target data that result from the target pipeline are averaged and concatenated in preparation for imaging. The steps
are as follows:
``create_ms_map``
Generate a mapfile of all the target data.
``combine_mapfile``
Generate a mapfile with all files in a single entry. This mapfile is used as
input to the next step.
``do_magic``
Compute image sizes and the number of channels to use during imaging from the MS
files from the previous step. The image size is calculated from the FWHM of the
primary beam at the lowest frequency at the mean elevation of the observation. The
number of channels is set simply as the number of subbands / 40, to result in
enough channels to allow multi-frequency synthesis (MFS), but not so many that
performance is impacted. A minimum of 2 channels is used.
``do_magic_maps``
Convert the output of do_magic into usable mapfiles.
``average``
Average the data as appropriate for imaging of the FOV. The amount of averaging
depends on the size of the image (to limit bandwidth and time smearing). The
averaging currently adopted is 16 s per time slot and 0.2 MHz per channel. These
values result in low levels of bandwidth and time smearing for the target image
sizes and resolutions.
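To see why 16 s and 0.2 MHz keep smearing low, a rough order-of-magnitude estimate can be made. The sketch below uses simplified textbook approximations, not the pipeline's actual criterion:

```python
import math

def smearing_extent_arcsec(radius_deg, dnu_hz, nu_hz, dt_s):
    """Very rough blur estimates for a source at the given radius from the
    phase centre: bandwidth smearing ~ (dnu/nu) * r, time smearing
    ~ omega_earth * dt * r. (Simplified order-of-magnitude formulas only.)"""
    r_arcsec = radius_deg * 3600.0
    bandwidth_blur = (dnu_hz / nu_hz) * r_arcsec
    omega_earth = 2.0 * math.pi / 86164.0  # sidereal rotation rate in rad/s
    time_blur = omega_earth * dt_s * r_arcsec
    return bandwidth_blur, time_blur

# 0.2 MHz channels and 16 s time slots at 150 MHz, 2 degrees from the phase centre:
bw, t = smearing_extent_arcsec(2.0, 0.2e6, 150e6, 16.0)
print(round(bw, 1), round(t, 1))
```

Both estimates come out well below a typical ~30 arcsec restoring beam at the default ``maxlambda_highres``, consistent with the claim of low smearing.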
``combine_mapfile_deep``
Generate a mapfile with all files in a single entry. This mapfile is used as
input to the next step.
``dpppconcat``
Run DPPP to concatenate the data. Concatenating the data speeds up gridding
and degridding with IDG by factors of several.
Imaging
-------
WSClean is used to produce the Stokes-I/V images. See the parset and the ``do_magic`` step above
for details of the parameters used. The values are chosen to produce good results for most
standard observations.
``wsclean_high_deep``
Image the data with WSClean+IDG. Imaging is done in MFS mode, resulting in a
single image for the full bandwidth. Primary-beam corrected and uncorrected images are
made.
``plot_im_high_i/v``
Make a png figure of the Stokes-I/V images, including estimates of the image rms and dynamic
range and the restoring beam size. Typical HBA images look like the ones below (Stokes-I image is shown first and the Stokes-V image second).
.. image:: MFS-I-image-pb.plot_im_high_i.png
.. image:: MFS-V-image-pb.plot_im_high_v.png
``make_source_list``
Make a list of sources from the Stokes-I image using PyBDSF and compare their properties to
those of the TGSS and GSM catalogs for HBA and LBA data, respectively. A number of plots
are made to allow quick assessment of the flux scale and astrometry of the image:
.. image:: flux_ratio_sky.png
.. image:: flux_ratio_vs_distance.png
.. image:: flux_ratio_vs_flux.png
.. image:: positional_offsets_sky.png
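The flux-scale check behind these plots amounts to taking per-source flux ratios against the reference catalog. A minimal sketch, with made-up illustrative numbers standing in for the PyBDSF/TGSS cross-match:

```python
from statistics import median

# Hypothetical matched fluxes in Jy: image values from PyBDSF, reference
# values from TGSS (HBA) or GSM (LBA). The numbers are illustrative only.
flux_image = [1.20, 0.45, 3.10, 0.80]
flux_catalog = [1.10, 0.50, 3.00, 0.85]

ratios = [si / sc for si, sc in zip(flux_image, flux_catalog)]
# The median ratio gives a quick estimate of the overall flux-scale offset;
# a value near 1 indicates a good flux scale.
print(round(median(ratios), 3))
```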
User-defined parameter configuration
------------------------------------
*Information about the input data*
``! target_input_path``
Directory where your concatenated target data are stored.
``! target_input_pattern``
Regular expression pattern of all your target files.
.. note::
These files should have the direction-independent calibration applied to the DATA
column (usually the ``*.pre-cal.ms`` files from the target pipeline).
*Imaging parameters*
- ``cellsize_highres_deg``
Cellsize in degrees (default: 0.00208).
- ``fieldsize_highres``
Size of the image is this value times the FWHM of mean semi-major axis of
the station beam at the lowest observed frequency (default: 1.5).
- ``maxlambda_highres``
Maximum uv-distance in lambda that will be used for imaging. A minimum uv-distance
of 80 lambda is used in all cases (default: 7000).
- ``image_padding``
Amount of padding to add during the imaging (default: 1.4).
- ``idg_mode``
IDG mode to use: cpu or hybrid (default: cpu).
- ``local_scratch_dir``
Scratch directory for WSClean (default: ``{{ job_directory }}``).
- ``image_rootname``
Output image root name (default: ``{{ job_directory }}/fullband``). The image will be named ``image_rootname-MFS-I-image.fits``.
Parameters for **HBA** and **LBA** observations
-----------------------------------------------
======================== ======= =======
**parameter** **HBA** **LBA**
------------------------ ------- -------
``cellsize_highres_deg`` 0.00208 0.00324
``maxlambda_highres`` 7000 4000
======================== ======= =======
.. Prefactor documentation master file, created by
sphinx-quickstart on Tue Nov 27 11:30:20 2018.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Prefactor: Preprocessing for Facet Calibration for LOFAR
========================================================
**prefactor** is a pipeline to correct for various instrumental and ionospheric effects in both **LOFAR HBA** and **LOFAR LBA** observations.
It includes:
- removal of clock offsets between core and remote stations (using clock-TEC separation)
- correction of the polarization alignment between XX and YY
- robust time-independent bandpass correction
- ionospheric RM corrections with `RMextract`_
- removal of the element beam
- advanced flagging and interpolation of bad data
- mitigation of broad-band RFI and bad stations
- direction-independent phase correction of the target, using a global sky model from `TGSS ADR`_ or the new `Global Sky Model`_ (GSM)
- detailed diagnostics
It prepares your data so that you can continue processing with any direction-dependent calibration software, such as `Rapthor`_, `factor`_, or `killMS`_.
.. note::
If you intend to use **prefactor** for the processing of long baselines (international stations), please refer to the `LOFAR-VLBI documentation`_, since compatibility between the two pipelines may be limited to certain versions or releases. It is recommended to follow the instructions there.
**Note:** The current version of **prefactor** does not yet support state-of-the-art calibration of target fields with the **LOFAR LBA**. An implementation is planned for upcoming releases. Please continue your processing using `LiLF`_.
Introduction
------------
.. toctree::
:maxdepth: 2
acknowledgements
Obtaining Prefactor
-------------------
.. toctree::
:maxdepth: 2
installation
changelog
Setting Up and Running Prefactor
--------------------------------
.. toctree::
:maxdepth: 2
preparation
parset
running
help
The Prefactor Pipelines
-----------------------
.. toctree::
:maxdepth: 2
pipelineoverview
calibrator
target
.. _LOFAR-VLBI documentation: https://lofar-vlbi.readthedocs.io/en/latest/
.. _Rapthor: https://github.com/darafferty/rapthor
.. _Global Sky Model: https://lcs165.lofar.eu/
.. _TGSS ADR: http://tgssadr.strw.leidenuniv.nl/
.. _RMextract: https://github.com/lofar-astron/RMextract/
.. _factor: https://github.com/lofar-astron/factor/
.. _killMS: https://github.com/saopicc/killMS/
.. _LiLF: https://github.com/revoltek/LiLF
.. _initsubtract_pipeline:

Initial-subtract pipeline
=========================
This pipeline images the full FoV (and first side lobe) at two resolutions and at
multiple frequencies, generating a sky model and subtracting it from the
visibilities. This pipeline only needs to be run if you want to use Factor to do the
direction-dependent imaging. The parset is named one of ``Initial-Subtract.parset``,
``Initial-Subtract-IDG.parset``, or ``Initial-Subtract-IDG-LowMemory.parset``,
depending on whether you want to use IDG with WSClean. IDG is generally much
faster than standard WSClean if you have GPUs.
.. note::
At this time, only HBA data are supported.
Prepare data
------------
This part of the pipeline prepares the target data in order to be imaged. The steps are
as follows:
``create_ms_map``
Generate a mapfile of all the target data (the concatenated datasets output by the
target pipeline, with the direction-independent phase-only calibration applied).
``combine_mapfile``
Generate a mapfile with all files in a single entry. This mapfile is used as
input to the next step.
``do_magic``
Compute frequency groupings, image sizes, and averaging values using the MS
files from the previous step. The image size is calculated from the FWHM of the
primary beam at the lowest frequency at the mean elevation of the observation.
``do_magic_maps``
Convert the output of do_magic into usable mapfiles.
``create_h5parm_map``
Create a mapfile with the direction independent h5parm.
``expand_h5parm_mapfile``
Expand the h5parm mapfile so that there is one entry for every file.
``select_imaging_bands``
Select bands spread over the full bandwidth for imaging.
``select_high_size``
Adjust the high_size mapfile to match the selected bands.
``select_high_nwavelengths``
Adjust the nwavelengths mapfile to match the selected bands.
Imaging and subtraction
-----------------------
Imaging is done at two resolutions to fully cover the expected range of source structure.
WSClean is used to produce the images. See the parset and the do_magic step above
for details of the parameters used. They are chosen to produce good results for
most standard observations.
``wsclean_high``
Image the data with WSClean to make the high-resolution images. The images will
automatically be stretched along the y-axis to account for the elongation of the
primary beam as a function of average elevation. A typical image at
lower Declination (+7 degrees) looks like the one below.
.. image:: initsub_high_image.png
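The y-axis stretch compensates for the elongation of the primary beam at low elevation; to first order the beam is elongated by roughly 1/sin(elevation). The sketch below is a simplified illustration, and the pipeline's exact stretch factor may differ:

```python
import math

def beam_elongation(mean_elevation_deg):
    """First-order y-axis elongation of the primary beam, ~1/sin(elevation).
    (Simplified sketch; the pipeline's exact factor may differ.)"""
    return 1.0 / math.sin(math.radians(mean_elevation_deg))

# A field at +7 degrees Declination culminates around 44 degrees elevation
# as seen from LOFAR, so the beam is stretched by roughly 1.4 along y.
print(round(beam_elongation(44.0), 2))
```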
``mask_high``
Make masks for the high-res images. Masks are used to exclude artifacts from
being included in the subtract steps.
``mk_inspect_dir``
Create the inspection_directory if needed.
``copy_mask``
Copy the mask images to where we want them.
``plot_im_high``
Plot the high-res image and mask as png files. Such an image is shown above.
``move_high``
Move the high-res images to where we want them.
``create_maxsize_high_map``
Make a mapfile with maximum image size.
``pad_model_high``
Pad the model images to a uniform size.
``pad_mask_high``
Pad the mask images to a uniform size.
``combine_model_high_mapfile``
Compress the model_high mapfile.
``expand_model_high``
Expand the model_high mapfile so that there is one entry for every band.
``combine_mask_high_mapfile``
Compress the mask_high mapfile.
``expand_mask_high``
Expand the mask high mapfile so that there is one entry for every band.
``fits_to_bbs_high``
Convert high-res model images to sky models that are understood by DPPP.
``make_sourcedb_high``
Make sourcedbs from the high-res sky models.
``expand_sourcedb_high``
Expand the sourcedb mapfile so that there is one entry for every file.
``subtract_high``
Predict, corrupt, and subtract the high-resolution model. The subtraction is
done from the DATA column to the SUBTRACTED_DATA_HIGH column. The SUBTRACTED_DATA_HIGH
column is imaged later in the ``wsclean_low`` step to pick up any emission missed in
the high-resolution image.
``select_low_size``
Adjust the low size mapfile to match the selected bands.
``select_low_nwavelengths``
Adjust the low nwavelengths mapfile to match the selected bands.
``wsclean_low``
Image the data (after subtraction of the high-resolution model) with WSClean
to make the low-resolution images. The images will automatically be
stretched along the y-axis to account for the elongation of the primary beam
as a function of average elevation. A typical image at lower Declination (+7
degrees) looks like the one below.
.. image:: initsub_low_image.png
``mask_low``
Make masks for the low-res images. Masks are used to exclude artifacts from
being included in the subtract steps.
``plot_im_low``
Plot the low-res image and mask as png files. Such an image is shown above.
``move_low``
Move the low-res images to where we want them.
``create_maxsize_low_map``
Make a mapfile with maximum image size.
``pad_model_low``
Pad the model images to a uniform size.
``pad_mask_low``
Pad the mask images to a uniform size.
``combine_model_low_mapfile``
Compress the model_low mapfile.
``expand_model_low``
Expand the model_low mapfile so that there is one entry for every band.
``combine_mask_low_mapfile``
Compress the mask_low mapfile.
``expand_mask_low``
Expand the mask low mapfile so that there is one entry for every band.
``fits_to_bbs_low``
Convert low-res model images to sky models.
``make_sourcedb_low``
Make sourcedbs from the low-res sky models.
``expand_sourcedb_low``
Expand the sourcedb mapfile so that there is one entry for every file.
``subtract_low``
Predict, corrupt, and subtract the low-resolution model. The subtraction is
done from the SUBTRACTED_DATA_HIGH column to the SUBTRACTED_DATA_ALL column.
Therefore, the SUBTRACTED_DATA_ALL column contains the final residual data needed
for Factor.
``merge``
Merge the high-res and low-res sky models together. These sky models are used
by Factor to add sources back before calibration.
``copy_skymodels``
Copy the merged sky models to the directory with the input data.
``createmap_plots``
Create a map with the generated plots.
``move_plots``
Move the plots to the inspection directory.
User-defined parameter configuration
------------------------------------
**Parameters you will need to adjust**
*Information about the input data*
``! data_input_path``
Directory where your concatenated target data are stored.
``! data_input_pattern``
Regular expression pattern of all your target files.
.. note::
These files should have the direction-independent calibration applied to the DATA
column (usually the ``*.pre-cal.ms`` files from the target pipeline).
*Location of the software*
``! prefactor_directory``
Path to your prefactor copy
``! wsclean_executable``
Path to your local WSClean executable
**Parameters you may need to adjust**
*Imaging and subtraction options*
``! cellsize_highres_deg``
Cellsize in degrees for high-resolution images.
``! cellsize_lowres_deg``
Cellsize in degrees for low-resolution images.
``! fieldsize_highres``
Size of the high-resolution image is this value times the FWHM of mean semi-major axis of
the station beam.
``! fieldsize_lowres``
Size of the low-resolution image is this value times the FWHM of mean semi-major axis of
the station beam.
``! maxlambda_highres``
Maximum uv-distance in lambda that will be used for the high-resolution imaging.
``! maxlambda_lowres``
Maximum uv-distance in lambda that will be used for the low-resolution imaging.
``! image_padding``
How much padding shall we add during the imaging?
``! nbands_image``
Number of bands to image (spread over the full bandwidth). Larger values
result in better subtraction but longer runtimes.
``! min_flux_jy``
Minimum flux density in Jy of clean components from the high-resolution
imaging to include in subtract_high step.
``! idg_mode``
IDG mode to use: cpu or hybrid (= CPU + GPU).
``! local_scratch_dir``
Scratch directory for wsclean (can be local to the processing nodes!).
Parameters for **HBA** and **LBA** observations
-----------------------------------------------
At this time, only HBA data are supported.
.. _installation:

Downloading and installing prefactor
====================================
You can choose between the manual installation of all software required by **prefactor** or the use of pre-compiled Docker images.
Manual installation
--------------------------
A Debian-based operating system is recommended for installing **prefactor**. In order to compile the required packages for running **prefactor** you may need to install the following packages::

    $ apt-get update && \
      apt-get install -y gfortran flex bison wcslib-dev libncurses5-dev casacore-data casacore-dev libboost-python-dev libcfitsio-dev python-dev \
      python3-numpy libcasa* cmake build-essential liblua5.3-dev libhdf5-serial-dev libarmadillo-dev libboost-filesystem-dev libboost-system-dev \
      libboost-date-time-dev libboost-numpy-dev libboost-signals-dev libboost-program-options-dev libboost-test-dev pybind11-dev libxml2-dev \
      libpng-dev pkg-config libgtkmm-3.0-dev git wget libfftw3-dev libgsl-dev
In order to run **prefactor** you need to get the following packages onto your system::
$ apt-get install -y vim wget casacore-tools casacore-data wcslib-dev libarmadillo8 bison libncurses5 flex libboost-date-time1.65.1 libboost-filesystem1.65.1 \
libboost-numpy1.65.1 libboost-python1.65.1 libboost-program-options1.65.1 libboost-system1.65.1 libboost-signals1.65.1 \
libboost-test1.65.1 libboost-python1.65-dev libstationresponse3 liblua5.3-dev libcasa-* pybind11-dev libcfitsio5 libcfitsio-dev \
libgtkmm-3.0 libfftw3-3 libhdf5-cpp-100 libpng16-16 libxml2 python3.7 python3-casacore dysco python3-numpy python3-scipy
Then you need to install the following software:
* `LofarStMan`_
* `Dysco`_ (v2.1 or newer)
* `IDG`_ (v0.8 or newer)
* `aoflagger`_ (v3.1.0 or newer)
* `LOFARBeam`_ (v4.1.1 or newer)
* `EveryBeam`_ (v0.2.0 or newer)
* `DP3`_ (v5.1 or newer)
* `WSClean`_ (v2.10.1 or newer)
* `RMextract`_ (v0.4.2 or newer)
* `LoSoTo`_ (v2.2 or newer)
* `LSMTool`_ (v1.4.3 or newer)
To install the **prefactor** software package call::

    $ pip3 install --upgrade pip && \
      git clone https://github.com/lofar-astron/prefactor.git <prefactor_dir> && \
      cd <prefactor_dir> && \
      git checkout <prefactor_version> && \
      git pull && \
      pip3 install --upgrade $PWD && \
      cd .. && \
      rm -rfv <prefactor_dir>
and ::
$ python3 -m pip install matplotlib
where ``<prefactor_dir>`` is the name of the temporary prefactor source directory from which you build the package and ``<prefactor_version>`` is the `version number`_ you aim to install.
Docker installation
--------------------------
There is no need to install all necessary software packages required for **prefactor** onto your system.
You can also make use of pre-compiled Docker images.
Instructions on how you can install Docker on your system can be found here:
* `CentOS`_
* `Debian`_
* `Fedora`_
* `Ubuntu`_
Getting the prefactor pipeline description
------------------------------------------
The **prefactor** pipeline is described in the `Common Workflow Language`_ (CWL).
In order to retrieve the pipeline description call::
$ git clone https://git.astron.nl/eosc/prefactor3-cwl.git <install_dir>
To start the pipeline you need to install an interpreter for the CWL description files.
The most common ones are `cwltool`_ and `toil`_::
$ python3 -m pip install cwltool cwl-runner toil[cwl]
.. _toil: https://toil.readthedocs.io/en/latest/index.html
.. _cwltool: https://github.com/common-workflow-language/cwltool
.. _Common Workflow Language: https://www.commonwl.org/
.. _CentOS: https://docs.docker.com/engine/install/centos/
.. _Debian: https://docs.docker.com/engine/install/debian/
.. _Fedora: https://docs.docker.com/engine/install/fedora/
.. _Ubuntu: https://docs.docker.com/engine/install/ubuntu/
.. _version number: https://github.com/lofar-astron/prefactor/tags
.. _LSMTool: https://github.com/darafferty/LSMTool
.. _RMextract: https://github.com/lofar-astron/RMextract.git
.. _LoSoTo: https://github.com/revoltek/losoto.git
.. _WSClean: https://gitlab.com/aroffringa/wsclean.git
.. _DP3: https://git.astron.nl/RD/DP3.git
.. _EveryBeam: https://git.astron.nl/RD/EveryBeam.git
.. _LOFARBeam: https://github.com/lofar-astron/LOFARBeam.git
.. _aoflagger: https://gitlab.com/aroffringa/aoflagger.git
.. _IDG: https://git.astron.nl/RD/idg.git
.. _Dysco: https://github.com/aroffringa/dysco.git
.. _LofarStMan: https://github.com/lofar-astron/LofarStMan
.. _github page: https://github.com/lofar-astron/prefactor
.. _ASTRON gitlab page: https://git.astron.nl/eosc/prefactor3-cwl
.. _parset:

Configuring prefactor
=====================
.. note::
If you are running the deprecated genericpipeline version of the pipeline (**prefactor** 3.2 or older), please check the :doc:`old instructions page<parset_old>`.
Preparing the configuration file
--------------------------------
The inputs for **prefactor** are provided in the `JSON format`_. The only required input is the input data to process (for the calibrator pipeline).
A minimum input file may look like this::
{
"msin": [
{"class": "Directory", "path": "3C286/L228161_SB000_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB001_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB002_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB003_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB004_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB005_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB006_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB007_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB008_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB009_uv.dppp.MS"}
]
}
There are more parameters you may want to adjust that can be added to this input JSON file. This is what an input that explicitly sets all the defaults of the HBA calibrator pipeline looks like::
{
"msin": [
{"class": "Directory", "path": "3C286/L228161_SB000_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB001_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB002_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB003_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB004_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB005_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB006_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB007_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB008_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB009_uv.dppp.MS"}
],
"refant": "CS00.*",
"flag_baselines": [],
"process_baselines_cal": "*&",
"filter_baselines": "*&",
"fit_offset_PA": false,
"do_smooth": false,
"rfi_strategy": "HBAdefault.rfis",
"max2interpolate": 30,
"ampRange": [0,0],
"skip_international": true,
"raw_data": false,
"propagatesolutions": true,
"flagunconverged": false,
"maxStddev": -1.0,
"solutions2transfer": null,
"antennas2transfer": "[FUSPID].*",
"do_transfer": false,
"trusted_sources": "3C48,3C147,3C196,3C295,3C380",
"demix_sources": ["CasA","CygA"],
"demix_target": "",
"demix_freqstep": 16,
"demix_timestep": 10,
"demix": false,
"ion_3rd": false,
"clock_smooth": true,
"tables2export": "clock",
"max_dppp_threads": 10,
"memoryperc": 20,
"min_length": 50,
"overhead": 0.8,
"min_separation": 30,
"max_separation_arcmin": 1.0,
"calibrator_path_skymodel": null,
"A-Team_skymodel": null,
"avg_timeresolution": 4,
"avg_freqresolution": "48.82kHz",
"bandpass_freqresolution": "195.3125kHz"
}
If you just want to alter one of the defaults it is sufficient to override it by specifying its new value in the JSON input file::
{
"msin": [
{"class": "Directory", "path": "3C286/L228161_SB000_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB001_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB002_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB003_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB004_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB005_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB006_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB007_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB008_uv.dppp.MS"},
{"class": "Directory", "path": "3C286/L228161_SB009_uv.dppp.MS"}
],
"demix": true
}
If you run the target pipeline you also need to provide the calibrator solution set::
{
"msin": [
{"class": "Directory", "path": "?????/L228163_SB000_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB001_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB002_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB003_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB004_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB005_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB006_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB007_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB008_uv.dppp.MS"},
{"class": "Directory", "path": "?????/L228163_SB009_uv.dppp.MS"}
],
"cal_solutions": {"class": "File", "path": "results/cal_values/cal_solutions.h5"}
}
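Rather than listing every subband by hand, the ``msin`` entries can be generated from the files on disk. A small helper sketch (hypothetical, not part of prefactor):

```python
import glob
import json

def make_inputs(ms_paths, cal_solutions=None):
    """Build the prefactor JSON input dict from a list of MS directory paths
    (hypothetical helper, not part of prefactor)."""
    inputs = {"msin": [{"class": "Directory", "path": p} for p in ms_paths]}
    if cal_solutions is not None:
        inputs["cal_solutions"] = {"class": "File", "path": cal_solutions}
    return inputs

if __name__ == "__main__":
    # Collect the target subbands from disk and write the input file.
    paths = sorted(glob.glob("*/L228163_SB*_uv.dppp.MS"))
    with open("target_input.json", "w") as f:
        json.dump(make_inputs(paths, "results/cal_values/cal_solutions.h5"), f, indent=4)
```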
A detailed description of the input parameters can be found in the :doc:`calibrator` and :doc:`target` section.
.. _JSON format: https://www.json.org/json-en.html
.. _parset_old:

Configuring prefactor
=====================
.. note::
These instructions are outdated and only valid for **prefactor** 3.2 or older. Please check the :doc:`recent instructions page<parset>`.
Preparing the configuration file
--------------------------------
To set up the genericpipeline for prefactor you need to customize the ``pipeline.cfg`` configuration file:
- Copy ``$LOFARROOT/share/pipeline/pipeline.cfg`` to someplace in your ``$HOME`` and open it in an editor.
- It starts with a section ``[DEFAULT]``, in there you need to edit three entries:
- ``runtime_directory``: This is the directory where the pipeline puts
logfiles, parsets, the status of successful steps etc. This can be set to a
directory in your ``$HOME``, but it is recommended to set this to the same
value as the `working_directory`
- ``working_directory``: This is the directory where the processed data of the
intermediate and final steps is put. Set this to a directory on your data
disk, e.g. ``/data/scratch/<username>/PipelineExample``
- ``recipe_directories``: This is a list of directories where the pipeline
searches for recipes and plugins. There should already be an entry there, but
another needs to be added so that the plugin scripts of prefactor are found.
You need to add the pre-facet calibration directory to this list (so that the
``plugins`` directory is a subdirectory of one of the
``recipe\_directories``). E.g.:
``recipe_directories = [%(pythonpath)s/lofarpipe/recipes,/home/<username>/software/prefactor]``.
- There may be empty entries in the ``[DEFAULT]`` section of the
  ``pipeline.cfg`` file. These were set during the installation of the LOFAR
  software and there is usually no need to worry about them.
- In case you do not run it on a cluster, you need to tell the pipeline to start the processes on the local machine:
- Search for the section ``[cluster]`` and set the entry ``clusterdesc`` to ``%(lofarroot)s/share/local.clusterdesc``.
- Add a section ``[remote]`` by adding the following lines::
[remote]
method = local
max_per_node = <NumCPUCores>
If there is already another section ``[remote]``, then remove that.
- The pipeline framework contains a number of parallel processing schemes for
  working on multi-node clusters. Ask your local sysadmin for advice.
Preparing the pipeline parset
-----------------------------
The pipeline parsets available in the prefactor directory (those files ending
with ``.parset``) are templates of genericpipeline parsets and need some small
editing before they can be run. To avoid confusion you should make a copy of the
parset that you want to run and give it a descriptive name. All parameters that
need to be changed are defined at the top of the parset with lines beginning
with ``!``. See the comments in the pipeline parsets and the notes below for some
hints on setting these parameters.
Some parameters depend on the observation to be processed and need to be
modified for each new observation, others are more "machine-dependent" so they
are the same for different observations that are processed on the same
machine(s).
See :ref:`pipeline_overview_old` for an overview of the pipeline parsets, and their
respective pages for a more in-depth description. Below are some general guidelines
for preparing the parsets:
- Don't edit the original parset files directly. Make a copy with a descriptive
name (e.g. ``Pre-Facet-Cal-calibrator-3c295.parset``) and edit that copy.
- The pipeline framework will use the filename of the pipeline parset as the
job-name if the latter is not explicitly given. That way there is no need to
change the ``runtime_directory`` and ``working_directory`` entries in the
``pipeline.cfg`` for different pipeline runs. The pipeline framework will
generate sub-directories with the job-name in there.
- The ``reference_station`` should be a station that is present in the
observation, and didn't do anything strange. It is worth checking the plots and
possibly changing the reference station if problems are found.
- The ``num_proc_per_node`` parameter is the number of processes of "small"
programs (with little memory usage and which are not multi-threaded) that are
run in parallel on one node.
- The ``num_proc_per_node_limit`` parameter is the number of processes of "big"
programs (with large memory usage and/or multi-threaded programs) that are run
in parallel on one node.
- In addition to setting how many DPPP processes (DPPP is a "big" program) are
run in parallel, you can set how many threads each DPPP process may use with the
``max_dppp_threads`` parameter.
- Both ``num_proc_per_node`` and (``num_proc_per_node_limit`` *
  ``max_dppp_threads``) should be equal to or smaller than the number of cores
  that you have on your processing computers.
- Similarly, (``max_imagers_per_node`` * ``max_percent_mem_per_img``) should be
  less than 100% and (``max_imagers_per_node`` * ``max_cpus_per_img``) should be
  equal to or smaller than the number of cores.
- If your pipeline runs out of memory, then you can also lower these parameters
to make the pipeline use less memory.
- Most of the actual processing is now done in DPPP, so the parameters that
control its behavior are the important ones.
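The core-budget rules in the bullets above can be checked with a small sketch (hypothetical helper, not part of the pipeline framework):

```python
def check_parallel_settings(num_proc_per_node, num_proc_per_node_limit,
                            max_dppp_threads, ncores):
    """Return True if both the 'small'-process count and the total thread
    count of 'big' processes fit within the available cores
    (hypothetical helper illustrating the rules described above)."""
    small_ok = num_proc_per_node <= ncores
    big_ok = num_proc_per_node_limit * max_dppp_threads <= ncores
    return small_ok and big_ok

# On a 32-core node: 10 small processes, and 3 DPPP processes x 10 threads each.
print(check_parallel_settings(10, 3, 10, ncores=32))
```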
.. _pipeline_overview:

Pipeline overview
=================
.. note::
If you are running the deprecated genericpipeline version of the pipeline (**prefactor** 3.2 or older), please check the :doc:`old instructions page<pipelineoverview_old>`.
**Prefactor** is organized in three major parts to process **LOFAR** data:
.. image:: prefactor_CWL_workflow_sketch.png
``prefactor_calibrator``
Processes the (amplitude-)calibrator to derive direction-independent corrections. See :ref:`calibrator_pipeline` for details.
``prefactor_target``
Transfers the direction-independent corrections to the target and does direction-independent calibration of the target. See :ref:`target_pipeline` for details.