Commit 74799526 authored by Lauri Himanen's avatar Lauri Himanen

Fixed linting, trying to solve submodule mess.

parents d8aa3770 9492c38d
Pipeline #70560 failed with stages in 31 minutes and 41 seconds
Subproject commit d918460c31728058834432b736062d44e1e1c074
Subproject commit f1c85ccdb381094b9e46382c66e260d9ed5d641c
Subproject commit 15d0110cbeda05aaea05e4d30ba3aeb0874dafef
Subproject commit 022a2af6bad45364dbdfac6b6c913f04186ac7d4
Command Line Interface (CLI)
----------------------------
The :code:`nomad` Python package comes with a command line interface (CLI) that
can be accessed after installation by simply running the :code:`nomad` command
in your terminal. The CLI provides a hierarchy of commands built with the `click
package <https://click.palletsprojects.com/>`_.

This documentation describes how the CLI can be used to manage a NOMAD
installation. For common use cases see :ref:`cli_use_cases`. For a full
reference of the CLI commands see :ref:`cli_ref`.

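Because the commands form a hierarchy, you can explore them interactively by
passing :code:`--help` at any level. For example (the exact output depends on
the installed version):

.. code-block:: sh

    nomad --help
    nomad client --help
    nomad admin ops --help
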
.. toctree::
    :maxdepth: 2

    cli_use_cases.rst
    cli_ref.rst

.. _cli_ref:

CLI Reference
*************

Client CLI commands
""""""""""""""""""""""""""""""""""""""""

.. click:: nomad.cli.client.client:client
    :prog: nomad client
    :show-nested:

Admin CLI commands
""""""""""""""""""""""""""""""""""""""""

.. click:: nomad.cli.admin.admin:admin
    :prog: nomad admin
    :show-nested:

.. _cli_use_cases:

Use cases
*********

Mirroring data between production environments
""""""""""""""""""""""""""""""""""""""""""""""
Sometimes you may wish to transfer data between separate deployments of the
NOMAD infrastructure. This use case covers the situation in which both
deployments are up and running and both have access to the underlying file
storage, part of which is mounted inside each container under
:code:`.volumes/fs`.

With both the source and target deployment running, you can use the
:ref:`cli_ref:mirror` command to transfer the data from source to target. The
mirror command will copy everything: the raw data, the archive data and the
associated metadata in the database.

The data to be mirrored is specified by using a query API path. For example,
to mirror an upload from the source deployment to the target deployment, you
would run the following CLI command inside the target deployment:

.. code-block:: sh

    nomad client -n <api_url> -u <username> -w <password> mirror <query_json> --source-mapping .volumes/fs:nomad/fairdi/prod/fs

Here is a breakdown of the different arguments:

* :code:`-n <url>`: URL of the API endpoint in the source deployment. This API
  will be queried to fetch the data to be mirrored, e.g.
  http://repository.nomad-coe.eu/api
* :code:`-u <username>`: Your username, used for authentication in the API call.
* :code:`-w <password>`: Your password, used for authentication in the API call.
* :code:`mirror <query>`: Your query as a JSON dictionary. See the documentation
  for the available keywords, e.g. :code:`'{"upload_id": "<upload_id>"}'`
* :code:`--source-mapping <mapping>`: The deployments use separate folders to
  store the archive and raw data. To correctly find the data that should be
  mirrored, the absolute path on the filesystem that is shared between the
  deployments needs to be provided, e.g. *.volumes/fs:nomad/fairdi/prod/fs*.
  The first part of this mapping is a Docker volume path
  (*.volumes/fs* in this example) that is mapped to the second
  path on the shared filesystem (*nomad/fairdi/prod/fs* in this example).

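Putting the arguments together, a full invocation could look like the
following sketch. The URL is the example endpoint given above; the username,
password, and upload id are made-up placeholders:

.. code-block:: sh

    nomad client -n http://repository.nomad-coe.eu/api -u jdoe -w secret \
        mirror '{"upload_id": "some_upload_id"}' \
        --source-mapping .volumes/fs:nomad/fairdi/prod/fs
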
Updating the AFLOW prototype information
""""""""""""""""""""""""""""""""""""""""
NOMAD uses the `AFLOW prototype library
<http://www.aflowlib.org/CrystalDatabase/>`_ to link bulk crystal entries with
prototypical structures based on their symmetry. The
:ref:`cli_ref:prototypes-update` subcommand can be used to update this
database from the online information provided by AFLOW. The command produces a
prototype dataset as a Python module.

The dataset should be recreated if the AFLOW dataset has been updated or if
the symmetry matching routine used within NOMAD is updated (e.g. the symmetry
tolerance is modified). To produce a new dataset, run the following command:

.. code-block:: sh

    nomad admin ops prototypes-update <module_path>

Here is a breakdown of the different arguments:

* :code:`<module_path>`: Path of the Python module in which the data should
  be stored. If the file does not exist it will be created. The prototype
  data used by NOMAD is stored under the path
  *nomad/normalizing/data/aflow_prototypes.py*

The command also provides a :code:`--matches-only` flag that only updates the
dataset entries used for matching the prototypes. With this flag, the online
information from AFLOW is not queried, which makes the process faster, e.g.
when you only want to update the matches after modifying the symmetry
routines.
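For example, assuming the bundled dataset path mentioned above, refreshing
only the matches after a change to the symmetry routines could look like this:

.. code-block:: sh

    nomad admin ops prototypes-update --matches-only nomad/normalizing/data/aflow_prototypes.py
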
@@ -45,6 +45,8 @@ extensions = [
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx.ext.napoleon',
'sphinx.ext.autosectionlabel',
'sphinx_click.ext',
'sphinxcontrib.httpdomain',
'sphinxcontrib.autohttp.flask',
'sphinxcontrib.autohttp.flaskqref',
@@ -52,6 +54,9 @@ extensions = [
'm2r'
]
# Prefix the automatically generated labels with the document name
autosectionlabel_prefix_document = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']
......
Operating NOMAD
###############
.. mdinclude:: ../ops/README.md
.. mdinclude:: ../ops/docker-compose/nomad/README.md
.. mdinclude:: ../ops/helm/nomad/README.md
.. mdinclude:: ../ops/containers/README.md
.. mdinclude:: ../ops/docker-compose/nomad-oasis/README.md
.. toctree::
    :maxdepth: 2

    depl_docker
    depl_helm
    depl_images
    cli
    oasis
## Overview

Read the [introduction](./introduction.html) and [setup](./setup.html) for an
overview of the different NOMAD services. This page describes how to deploy
and operate these services.
......
......@@ -519,6 +519,7 @@ def test_band_structure(bands_unpolarized_no_gap, bands_polarized_no_gap, bands_
assert gap_up_ev == pytest.approx(0.956, 0.01)
assert gap_down_ev == pytest.approx(1.230, 0.01)
def test_hashes_exciting(hash_exciting):
    """Tests that the hashes have been successfully calculated for calculations
    from exciting.
......