diff --git a/README.md b/README.md index a0e3909203c7efb7a485ea98e20fa1880c333ff0..f43b48ef64558758702d5fa6fee8d304c22f076d 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ This project implements the new *nomad@FAIRDI* infrastructure. Contrary to its N predecessor, it implements the NOMAD Repository and NOMAD Archive functionality within a single cohesive application. This project provides all necessary artifacts to develop, test, deploy, and operate the NOMAD Respository and Archive, e.g. at -[https://repository.nomad-coe.eu/app/gui](https://repository.nomad-coe.eu/app/gui). +[https://nomad-lab.eu](https://nomad-lab.eu). In the future, this project's aim is to integrate more NOMAD CoE components, like the NOMAD Encyclopedia and NOMAD Analytics Toolkit, to fully integrate NOMAD with one GUI and consistent @@ -37,7 +37,7 @@ nomad parse --show-backend ### For NOMAD developer -Read the [docs](https://repository.nomad-coe.eu/app/docs/index.html). The documentation is also part +Read the [docs](https://nomad-lab.eu/prod/rae/docs/index.html). The documentation is also part of the source code. It covers aspects like introduction, architecture, development setup/deployment, contributing, and API reference. diff --git a/docs/api.rst b/docs/api.rst index 97332aaf4c86c6aa6538eb2c8f5e1d4fbbd52a6b..a74fa951ce260da76814ef56648ecaa9c340393a 100644 --- a/docs/api.rst +++ b/docs/api.rst @@ -4,8 +4,8 @@ API Reference This is just a brief summary of all API endpoints of the NOMAD API. For a more compelling documention consult our *swagger* dashboards: -- (NOMAD API)[swagger dashboard](https://repository.nomad-coe.eu/app/api/) -- (NOMAD's Optimade API)[swagger dashboard](https://repository.nomad-coe.eu/app/optimade/) +- (NOMAD API)[swagger dashboard](https://nomad-lab.eu/prod/rae/api/) +- (NOMAD's Optimade API)[swagger dashboard](https://nomad-lab.eu/prod/rae/optimade/) Summary diff --git a/docs/api_tutorial.md b/docs/api_tutorial.md index 999ec1385a4d488d9e3d5b9cc6d8769687f9e83c..d4bd1abd200225062df29d54a044edcbab7d687f 100644 --- a/docs/api_tutorial.md +++ b/docs/api_tutorial.md @@ -12,7 +12,7 @@ trade-offs between expressiveness, learning curve, and convinience: - use a generic Python HTTP library like [requests](https://requests.readthedocs.io/en/master/) - use more specific Python libraries like [bravado](https://github.com/Yelp/bravado) that turn HTTP requests into NOMAD specific function calls based on an [OpenAPI spec](https://swagger.io/specification/) that NOMAD offers and that describes our API -- directly in the browser via our generated [swagger dashboard](https://repository.nomad-coe.eu/app/api/) +- directly in the browser via our generated [swagger dashboard](../api/) - use the NOMAD Python client library, which offers custom and more powerful implementations for certain tasks (currently only for accessing the NOMAD Archive) @@ -39,7 +39,7 @@ our gui) for entries that fit search criteria, like compounds having atoms *Si* it: ``` -curl -X GET "http://repository.nomad-coe.eu/app/api/repo/?atoms=Si&atoms=O" +curl -X GET "http://nomad-lab.eu/prod/rae/api/repo/?atoms=Si&atoms=O" ``` Here we used curl to send an HTTP GET request to return the resource located by the given URL. 
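The same query can be sent from Python as well. A minimal sketch, assuming the `requests` package and the nomad-lab.eu base URL used throughout this diff; the `pagination`/`results` keys and the `upload_id`/`calc_id` fields are the ones referred to later in this tutorial:

```python
# Sketch: the repository query from above, using Python's requests package.
import json

import requests

base_url = 'http://nomad-lab.eu/prod/rae/api'

# Repeated query parameters (atoms=Si&atoms=O) are passed as a list.
response = requests.get('%s/repo/' % base_url, params={'atoms': ['Si', 'O']})
response.raise_for_status()

data = response.json()
print(json.dumps(data.get('pagination', {}), indent=2))
for entry in data.get('results', []):
    print(entry.get('upload_id'), entry.get('calc_id'))
```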
@@ -47,20 +47,20 @@ In practice you can omit the `-X GET` (which is the default) and you might want the output: ``` -curl "http://repository.nomad-coe.eu/app/api/repo/?atoms=Si&atoms=O" | python -m json.tool +curl "http://nomad-lab.eu/prod/rae/api/repo/?atoms=Si&atoms=O" | python -m json.tool ``` You'll see the the metadata of the first 10 entries that match your criteria. There are various other query parameters. You find a full list in the generated [swagger dashboard -of our API](https://repository.nomad-coe.eu/app/api/). +of our API](https://nomad-lab.eu/prod/rae/api/). Besides search criteria you can determine how many results (`per_page`) and what page of results should be returned (`page`). If you want to go beyond the first 10.000 results you can use our *scroll* API (`scroll=true`, `scroll_after`). You can limit what properties should be returned (`include`, `exclude`). See the the generated [swagger dashboard -of our API](https://repository.nomad-coe.eu/app/api/) for more parameters. +of our API](https://nomad-lab.eu/prod/rae/api/) for more parameters. -If you use the [NOMAD Repository and Archive search interface](https://repository.nomad-coe.eu/app/gui/search) +If you use the [NOMAD Repository and Archive search interface](https://nomad-lab.eu/prod/rae/gui/search) and create a query, you can click th a **<>**-button (right and on top of the result list). This will give you some code examples with URLs for your search query. @@ -69,21 +69,21 @@ identified an entry (given via a `upload_id`/`calc_id`, see the query output), a you want to download it: ``` -curl "http://repository.nomad-coe.eu/app/api/raw/calc/JvdvikbhQp673R4ucwQgiA/k-ckeQ73sflE6GDA80L132VCWp1z/*" -o download.zip +curl "http://nomad-lab.eu/prod/rae/api/raw/calc/JvdvikbhQp673R4ucwQgiA/k-ckeQ73sflE6GDA80L132VCWp1z/*" -o download.zip ``` With `*` you basically requests all the files under an entry or path.. If you need a specific file (that you already know) of that calculation: ``` -curl "http://repository.nomad-coe.eu/app/api/raw/calc/JvdvikbhQp673R4ucwQgiA/k-ckeQ73sflE6GDA80L132VCWp1z/INFO.OUT" +curl "http://nomad-lab.eu/prod/rae/api/raw/calc/JvdvikbhQp673R4ucwQgiA/k-ckeQ73sflE6GDA80L132VCWp1z/INFO.OUT" ``` You can also download a specific file from the upload (given a `upload_id`), if you know the path of that file: ``` -curl "http://repository.nomad-coe.eu/app/api/raw/JvdvikbhQp673R4ucwQgiA/exciting_basis_set_error_study/monomers_expanded_k8_rgkmax_080_PBE/72_Hf/INFO.OUT" +curl "http://nomad-lab.eu/prod/rae/api/raw/JvdvikbhQp673R4ucwQgiA/exciting_basis_set_error_study/monomers_expanded_k8_rgkmax_080_PBE/72_Hf/INFO.OUT" ``` If you have a query @@ -91,19 +91,19 @@ that is more selective, you can also download all results. 
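Such a selective bulk download can also be scripted. A sketch only, assuming `requests` and the same `raw/query` parameters as the curl example that follows:

```python
# Sketch: stream a raw/query bulk download to disk; the parameters mirror the
# curl call below (only_atoms, system, crystal_system are taken from this tutorial).
import requests

base_url = 'http://nomad-lab.eu/prod/rae/api'
params = {'only_atoms': ['Si', 'O'], 'system': 'bulk', 'crystal_system': 'cubic'}

# Stream the returned zip to disk instead of loading it into memory.
with requests.get('%s/raw/query' % base_url, params=params, stream=True) as response:
    response.raise_for_status()
    with open('download.zip', 'wb') as f:
        for chunk in response.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)
```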
Here all compounds th consist of Si, O, bulk material simulations of cubic systems (currently ~100 entries): ``` -curl "http://repository.nomad-coe.eu/app/api/raw/query?only_atoms=Si&only_atoms=O&system=bulk&crystal_system=cubic" -o download.zip +curl "http://nomad-lab.eu/prod/rae/api/raw/query?only_atoms=Si&only_atoms=O&system=bulk&crystal_system=cubic" -o download.zip ``` In a similar way you can see the archive of an entry: ``` -curl "http://repository.nomad-coe.eu/app/api/archive/f0KQE2aiSz2KRE47QtoZtw/6xe9fZ9xoxBYZOq5lTt8JMgPa3gX" | python -m json.tool +curl "http://nomad-lab.eu/prod/rae/api/archive/f0KQE2aiSz2KRE47QtoZtw/6xe9fZ9xoxBYZOq5lTt8JMgPa3gX" | python -m json.tool ``` Or query and display the first page of 10 archives: ``` -curl "http://repository.nomad-coe.eu/app/api/archive/query?only_atoms=Si&only_atoms=O" | python -m json.tool +curl "http://nomad-lab.eu/prod/rae/api/archive/query?only_atoms=Si&only_atoms=O" | python -m json.tool ``` ## Using Python's *request* library @@ -115,7 +115,7 @@ client library that allows you to send requests: import requests import json -response = requests.get("http://repository.nomad-coe.eu/app/api/archive/query?only_atoms=Si&only_atoms=O") +response = requests.get("http://nomad-lab.eu/prod/rae/api/archive/query?only_atoms=Si&only_atoms=O") data = response.json() print(json.dumps(data), indent=2) ``` @@ -128,7 +128,7 @@ specific functions for you. ```python from bravado.client import SwaggerClient -nomad_url = 'http://repository.nomad-coe.eu/app/api' +nomad_url = 'http://nomad-lab.eu/prod/rae/api' # create the bravado client client = SwaggerClient.from_url('%s/swagger.json' % nomad_url) @@ -194,7 +194,7 @@ data you also need an account (email, password). The toy account used here, shou available on most nomad installations: ```python -nomad_url = 'https://labdev-nomad.esc.rzg.mpg.de/fairdi/nomad/latest/api' +nomad_url = 'https://nomad-lab.eu/prod/rae/api' user = 'leonard.hofstadter@nomad-fairdi.tests.de' password = 'password' ``` @@ -220,7 +220,7 @@ class KeycloakAuthenticator(Authenticator): self.password = password self.token = None self.__oidc = KeycloakOpenID( - server_url='https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + server_url='https://nomad-lab.eu/fairdi/keycloak/auth/', realm_name='fairdi_nomad_prod', client_id='nomad_public') @@ -296,7 +296,7 @@ if upload.tasks_status != 'SUCCESS': ``` Of course, you can also visit the nomad GUI -([https://labdev-nomad.esc.rzg.mpg.de/fairdi/nomad/latest/gui/uploads](https://labdev-nomad.esc.rzg.mpg.de/fairdi/nomad/latest/gui/uploads)) +([https://nomad-lab.eu/prod/rae/gui/uploads](https://nomad-lab.eu/prod/rae/gui/uploads)) to inspect your uploads. (You might click reload, if you had the page already open.) @@ -379,7 +379,7 @@ or downloading data are only **GET** operations controlled by URL parameters. Fo Downloading data: ``` -curl http://repository.nomad-coe.eu/app/api/raw/query?upload_id= -o download.zip +curl http://nomad-lab.eu/prod/rae/api/raw/query?upload_id= -o download.zip ``` It is a litle bit trickier, if you need to authenticate yourself, e.g. to download @@ -387,18 +387,18 @@ not yet published or embargoed data. All endpoints support and most require the an access token. 
To acquire an access token from our usermanagement system with curl: ``` curl --data 'grant_type=password&client_id=nomad_public&username=&password=' \ - https://repository.nomad-coe.eu/fairdi/keycloak/auth/realms/fairdi_nomad_prod/protocol/openid-connect/token + https://nomad-lab.eu/fairdi/keycloak/auth/realms/fairdi_nomad_prod/protocol/openid-connect/token ``` You can use the access-token with: ``` curl -H 'Authorization: Bearer ' \ - http://repository.nomad-coe.eu/app/api/raw/query?upload_id= -o download.zip + http://nomad-lab.eu/prod/rae/api/raw/query?upload_id= -o download.zip ``` ### Conclusions This was just a small glimpse into the nomad API. You should checkout our -[swagger-ui](https://repository.nomad-coe.eu/app/api/) +[swagger-ui](nomad-lab.eu/prod/rae/api/) for more details on all the API endpoints and their parameters. You can explore the API via the swagger-ui and even try it in your browser. diff --git a/docs/client/cli_use_cases.rst b/docs/client/cli_use_cases.rst index cc55cfbaf17e48a5a36a830621d225f6a5cce923..871d74c8886e039647e94103340e600347f8e930 100644 --- a/docs/client/cli_use_cases.rst +++ b/docs/client/cli_use_cases.rst @@ -27,7 +27,7 @@ Here is a breakdown of the different arguments: * :code:`-n `: Url to the API endpoint in the source deployment. This API will be queried to fetch the data to be mirrored. E.g. - http://repository.nomad-coe.eu/api + http://nomad-lab.eu/prod/rae/api * :code:`-u `: Your username that is used for authentication in the API call. * :code:`-w `: Your password that is used for authentication in the API call. * :code:`mirror `: Your query as a JSON dictionary. See the documentation for diff --git a/docs/client/install.rst b/docs/client/install.rst index d4eb101e08db5eb335b60d138e02124d9042687d..14957a2304e958c6b97e758024a1b0c1008c810d 100644 --- a/docs/client/install.rst +++ b/docs/client/install.rst @@ -14,7 +14,7 @@ Download and install latest release from nomad .. code-block:: sh - curl https://repository.nomad-coe.eu/app/dist/nomad-lab.tar.gz -o nomad-lab.tar.gz + curl https://nomad-lab.eu/prod/rae/dist/nomad-lab.tar.gz -o nomad-lab.tar.gz pip install ./nomad-lab.tar.gz There are different layers of dependencies that you have to install, in order to use diff --git a/docs/conf.py b/docs/conf.py index 7ce7e127d618ae728cb464e94c2a39a1ef74f773..67f30366c3fd62e4370caa52b7730884376a502b 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -200,4 +200,4 @@ def setup(app): # }, True) # app.add_transform(AutoStructify) -extlinks = {'api': ('https://repository.nomad-coe.eu/app/api/%s', 'NOMAD API ')} \ No newline at end of file +extlinks = {'api': ('https://nomad-lab.eu/prod/rae/api/%s', 'NOMAD API ')} \ No newline at end of file diff --git a/docs/introduction.md b/docs/introduction.md index 9babf7594662add2b3638595519f502acf3b0cdc..0783f4220d595b80f08cdbaa04d02bd55d9f07c8 100644 --- a/docs/introduction.md +++ b/docs/introduction.md @@ -14,9 +14,10 @@ This is the documentation of **nomad@FAIRDI**, the Open-Source continuation of t original NOMAD-coe software that reconciles the original code base, integrate it's services, allows 3rd parties to run individual and federated instance of the nomad infrastructure, provides nomad to other material science domains, and applies -the FAIRDI principles as proliferated by the [FAIRDI Data Infrastructure e.V.](http://fairdi.eu). +the FAIRDI principles as proliferated by the [FAIRDI Data Infrastructure e.V.](https://fairdi.eu). 
A central and publically available instance of the nomad software is run at the -[MPCDF](https://www.mpcdf.mpg.de/) in Garching, Germany. +[MPCDF](https://www.mpcdf.mpg.de/) in Garching, Germany. Software development and the +operation of NOMAD is done by the [NOMAD Laboratory](https://nomad-lab.eu) The nomad software runs SAAS on a server and is used via web-based GUI and ReSTful API. Originally developed and hosted as individual services, **nomad@FAIRDI** @@ -24,8 +25,7 @@ provides all services behind one GUI and API into a single coherent, integrated, modular software project. This documentation is only about the nomad *software*; it is about architecture, -how to contribute, code reference, engineering and operation of nomad. It is not a -nomad user manual. +how to contribute, code reference, engineering and operation of nomad. ## Architecture diff --git a/docs/metainfo.rst b/docs/metainfo.rst index fdd4d5afd31ae32ec6a946f73826be9b23e83dbd..4cc884d3723de14a3d71ebe6c858fadf91af2f6c 100644 --- a/docs/metainfo.rst +++ b/docs/metainfo.rst @@ -10,7 +10,7 @@ The NOMAD Metainfo stores descriptive and structured information about materials data contained in the NOMAD Archive. The Metainfo can be understood as the schema of the Archive. The NOMAD Archive data is structured to be independent of the electronic-structure theory code or molecular-simulation, -(or beyond). The NOMAD Metainfo can be browsed as part of the `NOMAD Repository and Archive web application `_. +(or beyond). The NOMAD Metainfo can be browsed as part of the `NOMAD Repository and Archive web application `_. Typically (meta-)data definitions are generated only for a predesigned and specific scientific field, application or code. In contrast, the NOMAD Metainfo considers all pertinent information @@ -29,10 +29,8 @@ the archive data is solely served by NOMAD's API. The NOMAD Metainfo started within the `NOMAD Laboratory `_. It was discussed at the `CECAM workshop Towards a Common Format for Computational Materials Science Data `_ -and is open to external contributions and extensions. More information can be found in: - -- `Towards a Common Format for Computational Materials Science Data (Psi-K 2016 Highlight) `_ provides a description on how to establish code-independent formats in detail and presents the challenges and practical strategies for achieving a common format for the representation of computational material-science data. -- `The Novel Materials Discovery Laboratory - Data formats and compression, D1.1 `_ outlines possible data formats, concepts, and compression techniques used to build a homogeneous (code-independent) data archive, called the NOMA +and is open to external contributions and extensions. More information can be found in +`Towards a Common Format for Computational Materials Science Data (Psi-K 2016 Highlight) `_. Metainfo Python Interface diff --git a/docs/upload.rst b/docs/upload.rst index 6c98dd3582d1f753654d1e1d8ec309e9155470aa..abc9cb18f05348eb65435def29efa41650b99476 100644 --- a/docs/upload.rst +++ b/docs/upload.rst @@ -2,13 +2,14 @@ Uploading Data to the NOMAD Repository ====================================== -To contribute your data to the repository, please, login to our `upload page <../uploads>`_ (you need to register first, if you do not have a NOMAD account yet). +To contribute your data to the repository, please, login to our `upload page <../gui/uploads>`_ +(you need to register first, if you do not have a NOMAD account yet). 
*A note for returning NOMAD users!* We revised the upload process with browser based upload alongside new shell commands. The new Upload page allows you to monitor upload processing and verify processing results before publishing your data to the Repository. -The `upload page <../uploads>`_ acts as a staging area for your data. It allows you to +The `upload page <../gui/uploads>`_ acts as a staging area for your data. It allows you to upload data, to supervise the processing of your data, and to examine all metadata that NOMAD extracts from your uploads. The data on the upload page will be private and can be deleted again. If you are satisfied with our processing, you can publish the data. @@ -29,8 +30,9 @@ extract the most important information of POTCAR files and store it in the files POTCAR files are only available to the uploader and assigned co-authors. This is done automatically; you don't need to do anything. -Once published, data cannot be erased. Linking a corrected version to a corresponding older one ("erratum") will be possible soon. -Files from an improved calculation, even for the same material, will be handled as a new entry. +Once published, data cannot be erased. Linking a corrected version to a corresponding older +one ("erratum") will be possible soon. Files from an improved calculation, even for the +same material, will be handled as a new entry. You can publish data as being open access or restricted for up to three years (with embargo). For the latter you may choose with whom you want to share your data. We strongly support the diff --git a/examples/api_use.py b/examples/api_use.py index ef0bb277405c809739ab1c242b327f7266aa28ea..9b345d0f5262fd49456437589e993e1924c31962 100644 --- a/examples/api_use.py +++ b/examples/api_use.py @@ -4,7 +4,7 @@ This is a brief example on how to use the public nomad@FAIRDI API. 
from bravado.client import SwaggerClient -nomad_url = 'http://repository.nomad-coe.eu/app/api' +nomad_url = 'http://nomad-lab.eu/prod/rae/api' # create the bravado client client = SwaggerClient.from_url('%s/swagger.json' % nomad_url) diff --git a/examples/api_use_authenticated.py b/examples/api_use_authenticated.py index 297cf120128307b9ba24e35ba555254f91119d67..5bb9d2394e5043e47c0c606fc47d986a5ce5210d 100644 --- a/examples/api_use_authenticated.py +++ b/examples/api_use_authenticated.py @@ -8,7 +8,7 @@ from urllib.parse import urlparse from keycloak import KeycloakOpenID from time import time -nomad_url = 'http://repository.nomad-coe.eu/app/api' +nomad_url = 'http://nomad-lab.eu/prod/rae/api' user = 'yourusername' password = 'yourpassword' @@ -21,7 +21,7 @@ class KeycloakAuthenticator(Authenticator): self.password = password self.token = None self.__oidc = KeycloakOpenID( - server_url='https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + server_url='https://nomad-lab.eu/fairdi/keycloak/auth/', realm_name='fairdi_nomad_prod', client_id='nomad_public') diff --git a/gui/package.json b/gui/package.json index 4465c0d8d9a9f356b578cc473348822e8745d28b..5a1c854cfed30257c43aa62e9afbd2cb27334882 100644 --- a/gui/package.json +++ b/gui/package.json @@ -1,6 +1,6 @@ { "name": "nomad-fair-gui", - "version": "0.8.2", + "version": "0.8.3", "commit": "e98694e", "private": true, "dependencies": { diff --git a/gui/public/env.js b/gui/public/env.js index 741fa010e593148c6e1f9c9f55c4e03a907972bf..1bc990b0f70bfe2576e51e54ec9a9fa349ff2719 100644 --- a/gui/public/env.js +++ b/gui/public/env.js @@ -1,16 +1,16 @@ window.nomadEnv = { - 'keycloakBase': 'https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + 'keycloakBase': 'https://nomad-lab.eu/fairdi/keycloak/auth/', 'keycloakRealm': 'fairdi_nomad_test', 'keycloakClientId': 'nomad_gui_dev', 'appBase': 'http://localhost:8000/fairdi/nomad/latest', 'debug': false, - 'matomoEnabled': true, - 'matomoUrl': 'https://repository.nomad-coe.eu/fairdi/stat', + 'matomoEnabled': false, + 'matomoUrl': 'https://nomad-lab.eu/prod/stat', 'matomoSiteId': '2', 'version': { - "label": "0.8.2", + "label": "0.8.3", "isBeta": false, "usesBetaData": false, - "officialUrl": "https://repository.nomad-coe.eu/app/gui" + "officialUrl": "https://nomad-lab.eu/prod/rae/gui" } } diff --git a/gui/src/components/About.js b/gui/src/components/About.js index ff4fecf3f83add55c3372b3cfb9adaaba54474c9..e5cd720a90c755eb40e281a6cb3649585021939d 100644 --- a/gui/src/components/About.js +++ b/gui/src/components/About.js @@ -100,7 +100,7 @@ export default function About() { window.location.href = 'https://encyclopedia.nomad-coe.eu/gui/#/search' }) makeClickable('analytics', () => { - window.location.href = 'https://www.nomad-coe.eu/index.php?page=bigdata-analyticstoolkit' + window.location.href = 'https://nomad-lab.eu/index.php?page=AItutorials' }) makeClickable('search', () => { history.push('/search') @@ -144,10 +144,14 @@ export default function About() { This is the *graphical user interface* (GUI) for the NOMAD Repository and Archive. It allows you to **search, access, and download all NOMAD data** in its raw (Repository) and processed (Archive) form. You can **upload and manage your own - raw materials science data**. Learn more about what data can be uploaded - and how to prepare your data on the [NOMAD Repository homepage](https://repository.nomad-coe.eu/). - You can access all published data without an account. If you want to provide - your own data, please login or register for an account. 
+ raw materials science data**. You can access all published data without an account. + If you want to provide your own data, please login or register for an account. + + You can learn more about on the NOMAD Repository and Archive + [homepage](https://nomad-lab.eu/index.php?page=repo-arch), our + [documentation](${appBase}/docs/index.html). + There is also an [FAQ](https://nomad-lab.eu/index.php?page=repository-archive-faqs) + and the more detailed [uploader documentation](${appBase}/docs/upload.html). `} @@ -217,7 +221,7 @@ export default function About() { There is a [tutorial on how to use the API with plain Python](${appBase}/docs/api_tutorial.html). Another [tutorial covers how to install and use NOMAD's Python client library](${appBase}/docs/archive_tutorial.html). - The [NOMAD Analytics Toolkit](https://analytics-toolkit.nomad-coe.eu) allows to use + The [NOMAD Analytics Toolkit](https://nomad-lab.eu/index.php?page=AIToolkit) allows to use this without installation and directly on NOMAD servers. `} diff --git a/gui/src/components/App.js b/gui/src/components/App.js index d257cc64d4992afb1ec09e558c1cbd3a42c026cb..96feb8cadf22d5406771a41e831c2214a6132f0c 100644 --- a/gui/src/components/App.js +++ b/gui/src/components/App.js @@ -278,7 +278,7 @@ function MainMenu() { /> } /> @@ -456,7 +456,7 @@ class NavigationUnstyled extends React.Component { disableGutters >
- + The NOMAD logo diff --git a/gui/src/components/Quantity.js b/gui/src/components/Quantity.js index c3a5f7cca956d90b4cd6d821ff53e6cda5c29aa0..0cf3d7f21921ba643ad215b387fcaafd122fde59 100644 --- a/gui/src/components/Quantity.js +++ b/gui/src/components/Quantity.js @@ -17,7 +17,10 @@ class Quantity extends React.Component { row: PropTypes.bool, column: PropTypes.bool, data: PropTypes.object, - quantity: PropTypes.string, + quantity: PropTypes.oneOfType([ + PropTypes.string, + PropTypes.func + ]), withClipboard: PropTypes.bool, ellipsisFront: PropTypes.bool } @@ -79,8 +82,18 @@ class Quantity extends React.Component { valueClassName = `${valueClassName} ${classes.ellipsisFront}` } + let value if (!loading) { - const value = data && quantity && _.get(data, quantity) + if (typeof quantity === 'string') { + value = data && quantity && _.get(data, quantity) + } else { + try { + value = quantity(data) + } catch { + value = undefined + } + } + if (children && children.length !== 0) { content = children } else if (value) { @@ -95,12 +108,14 @@ class Quantity extends React.Component { } } + const useLabel = label || (typeof quantity === 'string' ? quantity : 'MISSING LABEL') + if (row || column) { return
{children}
} else { return (
- {label || quantity} + {useLabel}
{loading ? @@ -108,7 +123,7 @@ class Quantity extends React.Component { : content} {withClipboard ? null}> - +
diff --git a/gui/src/components/dft/DFTEntryOverview.js b/gui/src/components/dft/DFTEntryOverview.js index 0c931c6466c6e16f35f142e0359d63b946bf2431..24da010b92d3b4038eb0ebcd66fbb8c554a65d0d 100644 --- a/gui/src/components/dft/DFTEntryOverview.js +++ b/gui/src/components/dft/DFTEntryOverview.js @@ -1,45 +1,63 @@ import React from 'react' import PropTypes from 'prop-types' -import { Typography } from '@material-ui/core' +import { Typography, Button, makeStyles, Tooltip } from '@material-ui/core' import Quantity from '../Quantity' import _ from 'lodash' +import {appBase} from '../../config' -export default class DFTEntryOverview extends React.Component { - static propTypes = { - data: PropTypes.object.isRequired, - loading: PropTypes.bool +const useStyles = makeStyles(theme => ({ + actions: { + marginTop: theme.spacing(1), + textAlign: 'right', + margin: -theme.spacing(1) } +})) - render() { - const { data } = this.props +export default function DFTEntryOverview(props) { + const classes = useStyles() + const {data} = props + if (!data.dft) { + return No metadata available + } - if (!data.dft) { - return No metadata available - } + const material_name = entry => entry.encyclopedia.material.material_name - return ( - - - - - - - - - - - - - - - - - - {_.get(data, 'dft.spacegroup_symbol')} ({_.get(data, 'dft.spacegroup')}) - - + return
+ + + + + + + + + + + + + + + + + + + {_.get(data, 'dft.spacegroup_symbol')} ({_.get(data, 'dft.spacegroup')}) + - ) - } + + {data.encyclopedia && data.encyclopedia.material && +
+ + + +
+ } +
+} + +DFTEntryOverview.propTypes = { + data: PropTypes.object.isRequired } diff --git a/gui/src/components/entry/RepoEntryView.js b/gui/src/components/entry/RepoEntryView.js index c425855b983acff18fb79f9b6067e21545c2a1c8..73bea30c1e566a9114715e5e3dd07d35daba4ac8 100644 --- a/gui/src/components/entry/RepoEntryView.js +++ b/gui/src/components/entry/RepoEntryView.js @@ -138,11 +138,10 @@ class RepoEntryView extends React.Component { - + entry.encyclopedia.material.material_id} label='material id' loading={loading} noWrap {...quantityProps} withClipboard /> - diff --git a/gui/src/components/metaInfoBrowser/MetaInfoBrowser.js b/gui/src/components/metaInfoBrowser/MetaInfoBrowser.js index dd52834622eb2aa4aed8ce47cc3e93381135aede..053973626898954d44a537fb791fada8f34f0335 100644 --- a/gui/src/components/metaInfoBrowser/MetaInfoBrowser.js +++ b/gui/src/components/metaInfoBrowser/MetaInfoBrowser.js @@ -8,6 +8,7 @@ import MetainfoSearch from './MetainfoSearch' import { FormControl, Select, Input, MenuItem, ListItemText, InputLabel, makeStyles } from '@material-ui/core' import { schema } from '../MetaInfoRepository' import { errorContext } from '../errors' +import { appBase } from '../../config' export const help = ` The NOMAD *metainfo* defines all quantities used to represent archive data in @@ -37,7 +38,7 @@ reference (blue) relations. If you bookmark this page, you can save the definition represented by the highlighted *main* card. -To learn more about the meta-info, visit the [meta-info homepage](https://metainfo.nomad-coe.eu/nomadmetainfo_public/archive.html). +To learn more about the meta-info, visit the [meta-info documentation](${appBase}/docs/metainfo.html). ` const MenuProps = { PaperProps: { diff --git a/gui/src/components/uploads/UploadPage.js b/gui/src/components/uploads/UploadPage.js index 68e4050daa34b69a647140fc767c085eebceba33..17f1494937833775e41b631e0dd96829f0a9e046 100644 --- a/gui/src/components/uploads/UploadPage.js +++ b/gui/src/components/uploads/UploadPage.js @@ -1,7 +1,7 @@ import React from 'react' import PropTypes, { instanceOf } from 'prop-types' import Markdown from '../Markdown' -import { withStyles, Paper, IconButton, FormGroup, FormLabel, Tooltip, Typography } from '@material-ui/core' +import { withStyles, Paper, IconButton, FormGroup, FormLabel, Tooltip, Typography, Link } from '@material-ui/core' import UploadIcon from '@material-ui/icons/CloudUpload' import Dropzone from 'react-dropzone' import Upload from './Upload' @@ -14,13 +14,13 @@ import { withApi } from '../api' import { withCookies, Cookies } from 'react-cookie' import Pagination from 'material-ui-flat-pagination' import { CopyToClipboard } from 'react-copy-to-clipboard' -import { guiBase } from '../../config' +import { guiBase, appBase } from '../../config' import qs from 'qs' import { CodeList } from '../About' export const help = ` NOMAD allows you to upload data. After upload, NOMAD will process your data: it will -identify the main output files of [supported codes](https://www.nomad-coe.eu/the-project/nomad-repository/nomad-repository-howtoupload) +identify the main output files of supported codes. and then it will parse these files. The result will be a list of entries (one per each identified mainfile). Each entry is associated with metadata. This is data that NOMAD acquired from your files and that describe your calculations (e.g. chemical formula, used code, system type and symmetry, etc.). 
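The "supported codes" this help text refers to can also be listed programmatically. A sketch, assuming the backend information served by `nomad/app/api/info.py` (changed further below in this diff) is exposed at `<api base>/info`:

```python
# Sketch: list parsers and codes from the backend info resource.
# The /info path is an assumption; adjust it if your deployment differs.
import requests

base_url = 'http://nomad-lab.eu/prod/rae/api'

# 'parsers' is a list of parser names; 'codes' carries the supported-code entries.
info = requests.get('%s/info' % base_url).json()
print('parsers:', ', '.join(info.get('parsers', [])))
print('codes:', info.get('codes'))
```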
@@ -275,6 +275,8 @@ class UploadPage extends React.Component { NOMAD will search through all files and identify the relevant files automatically. Each uploaded file can be up to 32GB in size, you can have up to 10 unpublished uploads simultaneously. Your uploaded data is not published right away. + Find more details about uploading data in our documentation or visit + our FAQs. The following codes are supported: . diff --git a/gui/src/config.js b/gui/src/config.js index a5bbcf59ada5f6e10ff0632696581948de0b9ac3..f6978b1f1851c9c95ad1e0460316b9aeb0926b14 100644 --- a/gui/src/config.js +++ b/gui/src/config.js @@ -3,7 +3,7 @@ import { createMuiTheme } from '@material-ui/core' window.nomadEnv = window.nomadEnv || {} export const version = window.nomadEnv.version export const appBase = window.nomadEnv.appBase.replace(/\/$/, '') -// export const apiBase = 'http://repository.nomad-coe.eu/v0.8/api' +// export const apiBase = 'http://nomad-lab.eu/prod/rae/api' export const apiBase = `${appBase}/api` export const optimadeBase = `${appBase}/optimade` export const guiBase = process.env.PUBLIC_URL @@ -19,7 +19,7 @@ export const maxLogsToShow = 50 export const consent = ` By using this web-site and by uploading and downloading data, you agree to the -[terms of use](https://www.nomad-coe.eu/the-project/nomad-repository/nomad-repository-terms). +[terms of use](https://nomad-lab.eu/index.php?page=terms). Uploaded data is licensed under the Creative Commons Attribution license ([CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)). You can publish diff --git a/gui/src/index.js b/gui/src/index.js index b885e2dc2e289f34747ffe696e35a7b3a257bcbf..009aa7cf069f42756f6f7d4117f5d7b94667a1c1 100644 --- a/gui/src/index.js +++ b/gui/src/index.js @@ -18,7 +18,7 @@ export const matomo = matomoEnabled ? PiwikReactRouter({ siteId: matomoSiteId, clientTrackerName: 'stat.js', serverTrackerName: 'stat' -}) : null +}) : [] const keycloak = Keycloak({ url: keycloakBase, diff --git a/nomad/app/api/info.py b/nomad/app/api/info.py index ce67893f0063c05f5a88fcc1cbdaf7510027699a..c5d07d6cfc343a4904239e2fe93d371fff566210 100644 --- a/nomad/app/api/info.py +++ b/nomad/app/api/info.py @@ -20,7 +20,8 @@ from typing import Dict, Any from flask_restplus import Resource, fields from datetime import datetime -from nomad import config, parsing, normalizing, datamodel, gitinfo, search +from nomad import config, normalizing, datamodel, gitinfo, search +from nomad.parsing import parsers, MatchingParser from .api import api @@ -94,8 +95,8 @@ class InfoResource(Resource): def get(self): ''' Return information about the nomad backend and its configuration. 
''' codes_dict = {} - for parser in parsing.parser_dict.values(): - if isinstance(parser, parsing.MatchingParser) and parser.domain == 'dft': + for parser in parsers.parser_dict.values(): + if isinstance(parser, MatchingParser) and parser.domain == 'dft': code_name = parser.code_name if code_name in codes_dict: continue @@ -105,10 +106,10 @@ class InfoResource(Resource): return { 'parsers': [ key[key.index('/') + 1:] - for key in parsing.parser_dict.keys()], + for key in parsers.parser_dict.keys()], 'metainfo_packages': ['general', 'general.experimental', 'common', 'public'] + sorted([ key[key.index('/') + 1:] - for key in parsing.parser_dict.keys()]), + for key in parsers.parser_dict.keys()]), 'codes': codes, 'normalizers': [normalizer.__name__ for normalizer in normalizing.normalizers], 'statistics': statistics(), diff --git a/nomad/app/api/metainfo.py b/nomad/app/api/metainfo.py index 1b0979f16514eadb1115a4e67357eabb60f0b6eb..be6d05cd38f1b1300287251a2c5ae4b50987e104 100644 --- a/nomad/app/api/metainfo.py +++ b/nomad/app/api/metainfo.py @@ -22,7 +22,7 @@ import importlib from nomad.metainfo.legacy import python_package_mapping, LegacyMetainfoEnvironment from nomad.metainfo import Package -from nomad.parsing import parsers +from nomad.parsing.parsers import parsers from .api import api diff --git a/nomad/cli/__init__.py b/nomad/cli/__init__.py index 8677e47c2a916f9fb66bf31a83cd7fd150aec2e3..e7dc49675101d7a9b4435591b7b99224758ab5a7 100644 --- a/nomad/cli/__init__.py +++ b/nomad/cli/__init__.py @@ -36,6 +36,7 @@ lazy_import.lazy_module('nomad.config') lazy_import.lazy_module('nomad.infrastructure') lazy_import.lazy_module('nomad.utils') lazy_import.lazy_module('nomad.parsing') +lazy_import.lazy_module('nomad.parsing.parsers') lazy_import.lazy_module('nomad.normalizing') lazy_import.lazy_module('nomad.datamodel') lazy_import.lazy_module('nomad.search') @@ -46,5 +47,5 @@ lazy_import.lazy_module('nomad.client') lazy_import.lazy_module('nomadcore') lazy_import.lazy_module('nomadcore.simple_parser') -from . import dev, parse, admin, client # noqa +from . import dev, admin, parse, client # noqa from .cli import run_cli, cli # noqa diff --git a/nomad/cli/client/__init__.py b/nomad/cli/client/__init__.py index 05fba3ed7065e48c6fa0ea8628cca62a72de7a20..081775c168a5bfb28f1bc917de3b22e0fa973af4 100644 --- a/nomad/cli/client/__init__.py +++ b/nomad/cli/client/__init__.py @@ -45,6 +45,7 @@ lazy_import.lazy_module('nomad.files') lazy_import.lazy_module('nomad.search') lazy_import.lazy_module('nomad.datamodel') lazy_import.lazy_module('nomad.parsing') +lazy_import.lazy_module('nomad.parsing.parsers') lazy_import.lazy_module('nomad.infrastructure') lazy_import.lazy_module('nomad.doi') lazy_import.lazy_module('nomad.client') diff --git a/nomad/cli/client/statistics.py b/nomad/cli/client/statistics.py index 02812074ef201ee904fd3bd1e2c394dc305a099c..3b94ece86be6f87df48e3778e8e63a9bf441e363 100644 --- a/nomad/cli/client/statistics.py +++ b/nomad/cli/client/statistics.py @@ -537,7 +537,7 @@ def statistics_table(html, geometries, public_path):

For more and interactive statistics, use the metadata view of the NOMAD Repository and Archive search.

90% of VASP calculations are provided by @@ -554,7 +554,7 @@ def statistics_table(html, geometries, public_path):

The archive data is represented in a code-independent, structured form. The archive structure and all quantities are described via the - NOMAD Metainfo. + NOMAD Metainfo. The NOMAD Metainfo defines a conceptual model to store the values connected to atomistic or ab initio calculations. A clear and usable metadata definition is a prerequisites to preparing the data for analysis that everybody diff --git a/nomad/cli/parse.py b/nomad/cli/parse.py index 445afa554b5845b24c260fd6ad9d61a6f1f8719a..cadec1d8dbea1979337a44dd0db4a98b18b6575f 100644 --- a/nomad/cli/parse.py +++ b/nomad/cli/parse.py @@ -4,10 +4,8 @@ import json import click import sys -from nomad import utils -from nomad import parsing -from nomad import normalizing -from nomad import datamodel +from nomad import utils, parsing, normalizing, datamodel + import nomadcore from .cli import cli @@ -22,15 +20,16 @@ def parse( Run the given parser on the downloaded calculation. If no parser is given, do parser matching and use the respective parser. ''' + from nomad.parsing import parsers mainfile = os.path.basename(mainfile_path) if logger is None: logger = utils.get_logger(__name__) if parser_name is not None: - parser = parsing.parser_dict.get(parser_name) + parser = parsers.parser_dict.get(parser_name) assert parser is not None, 'the given parser must exist' else: - parser = parsing.match_parser(mainfile_path, strict=strict) + parser = parsers.match_parser(mainfile_path, strict=strict) if isinstance(parser, parsing.MatchingParser): parser_name = parser.name else: diff --git a/nomad/config.py b/nomad/config.py index fe9907f3602e09d0a455a170ed1c7a9be19fce12..69a41bdcbfbbf7409b5c50a3209d018671b25252 100644 --- a/nomad/config.py +++ b/nomad/config.py @@ -120,7 +120,7 @@ elastic = NomadConfig( ) keycloak = NomadConfig( - server_url='https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + server_url='https://nomad-lab.eu/fairdi/keycloak/auth/', realm_name='fairdi_nomad_test', username='admin', password='password', @@ -250,7 +250,7 @@ normalize = NomadConfig( client = NomadConfig( user='leonard.hofstadter@nomad-fairdi.tests.de', password='password', - url='http://repository.nomad-coe.eu/app/api' + url='http://nomad-lab.eu/prod/rae/api' ) datacite = NomadConfig( @@ -262,14 +262,14 @@ datacite = NomadConfig( ) meta = NomadConfig( - version='0.8.2', + version='0.8.3', commit=gitinfo.commit, release='devel', default_domain='dft', service='unknown nomad service', name='novel materials discovery (NOMAD)', description='A FAIR data sharing platform for materials science data', - homepage='https://repository.nomad-coe.eu/v0.8', + homepage='https://https://nomad-lab.eu', source_url='https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR', maintainer_email='markus.scheidgen@physik.hu-berlin.de' ) diff --git a/nomad/datamodel/dft.py b/nomad/datamodel/dft.py index 5e5ae809bff908e830b135a594e4b15b196af261..6b117a39539ac9ae464d8e3713695bf0f3a31fb4 100644 --- a/nomad/datamodel/dft.py +++ b/nomad/datamodel/dft.py @@ -266,7 +266,7 @@ class DFTMetadata(MSection): def code_name_from_parser(self): entry = self.m_parent if entry.parser_name is not None: - from nomad.parsing import parser_dict + from nomad.parsing.parsers import parser_dict parser = parser_dict.get(entry.parser_name) if hasattr(parser, 'code_name'): return parser.code_name diff --git a/nomad/doi.py b/nomad/doi.py index 8582a2fd125a0af658a8382f77e461e0cb98720a..c4a4ef25e34dc8e6b020af459054df90c63fa633 100644 --- a/nomad/doi.py +++ b/nomad/doi.py @@ -30,7 +30,7 @@ from nomad import config, utils def 
edit_url(doi: str, url: str = None): ''' Changes the URL of an already findable DOI. ''' if url is None: - url = 'https://repository.nomad-coe.eu/app/gui/datasets/doi/%s' % doi + url = 'https://nomad-lab.eu/prod/rae/gui/datasets/doi/%s' % doi metadata_url = '%s/doi/%s' % (config.datacite.mds_host, doi) response = requests.put( diff --git a/nomad/metainfo/example.py b/nomad/metainfo/example.py index e7ad6f57cc2720c2b895ed95191b39d3873206fe..ca57f5951974b2b0cb74aa2788e7eda1f90c8eab 100644 --- a/nomad/metainfo/example.py +++ b/nomad/metainfo/example.py @@ -22,7 +22,7 @@ from nomad.metainfo import ( MSection, MCategory, Section, Quantity, Package, SubSection, MEnum, Datetime, constraint) -m_package = Package(links=['http://metainfo.nomad-coe.eu']) +m_package = Package(links=['https://nomad-lab.eu/prod/rae/docs/metainfo.html']) class SystemHash(MCategory): diff --git a/nomad/parsing/__init__.py b/nomad/parsing/__init__.py index 942e0f5a79bd979d73631ab163199813361a0b95..4cfa35be822e101485dcf5f5016b0151ad5ea18f 100644 --- a/nomad/parsing/__init__.py +++ b/nomad/parsing/__init__.py @@ -50,14 +50,14 @@ The implementation :class:`LegacyParser` is used for most NOMAD-coe parsers. The parser definitions are available via the following two variables. -.. autodata:: nomad.parsing.parsers -.. autodata:: nomad.parsing.parser_dict +.. autodata:: nomad.parsing.parsers.parsers +.. autodata:: nomad.parsing.parsers.parser_dict Parsers are reused for multiple calculations. Parsers and calculation files are matched via regular expressions. -.. autofunction:: nomad.parsing.match_parser +.. autofunction:: nomad.parsing.parsers.match_parser Parsers in NOMAD-coe use a *backend* to create output. There are different NOMAD-coe basends. In nomad@FAIRDI, we only currently only use a single backed. The following @@ -70,503 +70,6 @@ based on nomad@fairdi's metainfo: :members: ''' -from typing import Callable, IO, Union, Dict -import os.path - -from nomad import config, datamodel - -from nomad.parsing.legacy import ( - AbstractParserBackend, Backend, BackendError, BadContextUri, LegacyParser, VaspOutcarParser) +from nomad.parsing.legacy import AbstractParserBackend, Backend, BackendError, BadContextUri, LegacyParser from nomad.parsing.parser import Parser, BrokenParser, MissingParser, MatchingParser -from nomad.parsing.artificial import ( - TemplateParser, GenerateRandomParser, ChaosParser, EmptyParser) -from eelsparser import EelsParser -from mpesparser import MPESParser -from aptfimparser import APTFIMParser - -try: - # these packages are not available without parsing extra, which is ok, if the - # parsers are only initialized to load their metainfo definitions - import magic - import gzip - import bz2 - import lzma - - _compressions = { - b'\x1f\x8b\x08': ('gz', gzip.open), - b'\x42\x5a\x68': ('bz2', bz2.open), - b'\xfd\x37\x7a': ('xz', lzma.open) - } - - encoding_magic = magic.Magic(mime_encoding=True) - -except ImportError: - pass - - -def match_parser(mainfile_path: str, strict=True) -> 'Parser': - ''' - Performs parser matching. This means it take the given mainfile and potentially - opens it with the given callback and tries to identify a parser that can parse - the file. - - This is determined by filename (e.g. *.out), mime type (e.g. text/*, application/xml), - and beginning file contents. - - Arguments: - mainfile_path: Path to the mainfile - strict: Only match strict parsers, e.g. no artificial parsers for missing or empty entries. - - Returns: The parser, or None if no parser could be matched. 
- ''' - mainfile = os.path.basename(mainfile_path) - if mainfile.startswith('.') or mainfile.startswith('~'): - return None - - with open(mainfile_path, 'rb') as f: - compression, open_compressed = _compressions.get(f.read(3), (None, open)) - - with open_compressed(mainfile_path, 'rb') as cf: # type: ignore - buffer = cf.read(config.parser_matching_size) - - mime_type = magic.from_buffer(buffer, mime=True) - - decoded_buffer = None - encoding = None - try: # Try to open the file as a string for regex matching. - decoded_buffer = buffer.decode('utf-8') - except UnicodeDecodeError: - # This file is either binary or has wrong encoding - encoding = encoding_magic.from_buffer(buffer) - - if config.services.force_raw_file_decoding: - encoding = 'iso-8859-1' - - if encoding in ['iso-8859-1']: - try: - decoded_buffer = buffer.decode(encoding) - except Exception: - pass - - for parser in parsers: - if strict and isinstance(parser, (MissingParser, EmptyParser)): - continue - - if parser.is_mainfile(mainfile_path, mime_type, buffer, decoded_buffer, compression): - # potentially convert the file - if encoding in ['iso-8859-1']: - try: - with open(mainfile_path, 'rb') as binary_file: - content = binary_file.read().decode(encoding) - except Exception: - pass - else: - with open(mainfile_path, 'wt') as text_file: - text_file.write(content) - - # TODO: deal with multiple possible parser specs - return parser - - return None - - -parsers = [ - GenerateRandomParser(), - TemplateParser(), - ChaosParser(), - LegacyParser( - name='parsers/phonopy', code_name='Phonopy', code_homepage='https://phonopy.github.io/phonopy/', - parser_class_name='phonopyparser.PhonopyParserWrapper', - # mainfile_contents_re=r'', # Empty regex since this code calls other DFT codes. - mainfile_name_re=(r'.*/phonopy-FHI-aims-displacement-0*1/control.in$') - ), - LegacyParser( - name='parsers/vasp', code_name='VASP', code_homepage='https://www.vasp.at/', - parser_class_name='vaspparser.VASPRunParser', - mainfile_mime_re=r'(application/.*)|(text/.*)', - mainfile_contents_re=( - r'^\s*<\?xml version="1\.0" encoding="ISO-8859-1"\?>\s*' - r'?\s*' - r'?\s*' - r'?\s*\s*vasp\s*' - r'?'), - supported_compressions=['gz', 'bz2', 'xz'] - ), - VaspOutcarParser( - name='parsers/vasp-outcar', code_name='VASP', code_homepage='https://www.vasp.at/', - parser_class_name='vaspparser.VaspOutcarParser', - mainfile_name_re=r'(.*/)?OUTCAR(\.[^\.]*)?', - mainfile_contents_re=(r'^\svasp\.') - ), - LegacyParser( - name='parsers/exciting', code_name='exciting', code_homepage='http://exciting-code.org/', - parser_class_name='excitingparser.ExcitingParser', - mainfile_name_re=r'^.*.OUT(\.[^/]*)?$', - mainfile_contents_re=(r'EXCITING.*started') - ), - LegacyParser( - name='parsers/fhi-aims', code_name='FHI-aims', code_homepage='https://aimsclub.fhi-berlin.mpg.de/', - parser_class_name='fhiaimsparser.FHIaimsParser', - mainfile_contents_re=( - r'^(.*\n)*' - r'?\s*Invoking FHI-aims \.\.\.' 
- # r'?\s*Version' - ) - ), - LegacyParser( - name='parsers/cp2k', code_name='CP2K', code_homepage='https://www.cp2k.org/', - parser_class_name='cp2kparser.CP2KParser', - mainfile_contents_re=( - r'\*\*\*\* \*\*\*\* \*\*\*\*\*\* \*\* PROGRAM STARTED AT\s.*\n' - r' \*\*\*\*\* \*\* \*\*\* \*\*\* \*\* PROGRAM STARTED ON\s*.*\n' - r' \*\* \*\*\*\* \*\*\*\*\*\* PROGRAM STARTED BY .*\n' - r' \*\*\*\*\* \*\* \*\* \*\* \*\* PROGRAM PROCESS ID .*\n' - r' \*\*\*\* \*\* \*\*\*\*\*\*\* \*\* PROGRAM STARTED IN .*\n' - ) - ), - LegacyParser( - name='parsers/crystal', code_name='Crystal', code_homepage='https://www.crystal.unito.it/', - parser_class_name='crystalparser.CrystalParser', - mainfile_contents_re=( - r'(CRYSTAL\s*\n\d+ \d+ \d+)|(CRYSTAL will run on \d+ processors)|' - r'(\s*\*\s*CRYSTAL[\d]+\s*\*\s*\*\s*(public|Release) \: [\d\.]+.*\*)|' - r'(Executable:\s*[/_\-a-zA-Z0-9]*MPPcrystal)' - ) - ), - # The main contents regex of CPMD was causing a catostrophic backtracking issue - # when searching through the first 500 bytes of main files. We decided - # to use only a portion of the regex to avoid that issue. - LegacyParser( - name='parsers/cpmd', code_name='CPMD', code_homepage='https://www.lcrc.anl.gov/for-users/software/available-software/cpmd/', - parser_class_name='cpmdparser.CPMDParser', - mainfile_contents_re=( - # r'\s+\*\*\*\*\*\* \*\*\*\*\*\* \*\*\*\* \*\*\*\* \*\*\*\*\*\*\s*' - # r'\s+\*\*\*\*\*\*\* \*\*\*\*\*\*\* \*\*\*\*\*\*\*\*\*\* \*\*\*\*\*\*\*\s+' - r'\*\*\* \*\* \*\*\* \*\* \*\*\*\* \*\* \*\* \*\*\*' - # r'\s+\*\* \*\* \*\*\* \*\* \*\* \*\* \*\* \*\*\s+' - # r'\s+\*\* \*\*\*\*\*\*\* \*\* \*\* \*\* \*\*\s+' - # r'\s+\*\*\* \*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\s+' - # r'\s+\*\*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\*\*\*\*\s+' - # r'\s+\*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\*\*\*\s+' - ) - ), - LegacyParser( - name='parsers/nwchem', code_name='NWChem', code_homepage='http://www.nwchem-sw.org/', - parser_class_name='nwchemparser.NWChemParser', - mainfile_contents_re=( - r'Northwest Computational Chemistry Package \(NWChem\) (\d+\.)+\d+' - ) - ), - LegacyParser( - name='parsers/bigdft', code_name='BigDFT', code_homepage='http://bigdft.org/', - parser_class_name='bigdftparser.BigDFTParser', - mainfile_contents_re=( - # r'__________________________________ A fast and precise DFT wavelet code\s*' - # r'\| \| \| \| \| \|\s*' - # r'\| \| \| \| \| \| BBBB i gggggg\s*' - # r'\|_____\|_____\|_____\|_____\|_____\| B B g\s*' - # r'\| \| : \| : \| \| \| B B i g\s*' - # r'\| \|-0\+--\|-0\+--\| \| \| B B i g g\s*' - r'\|_____\|__:__\|__:__\|_____\|_____\|___ BBBBB i g g\s*' - # r'\| : \| \| \| : \| \| B B i g g\s*' - # r'\|--\+0-\| \| \|-0\+--\| \| B B iiii g g\s*' - # r'\|__:__\|_____\|_____\|__:__\|_____\| B B i g g\s*' - # r'\| \| : \| : \| \| \| B BBBB i g g\s*' - # r'\| \|-0\+--\|-0\+--\| \| \| B iiiii gggggg\s*' - # r'\|_____\|__:__\|__:__\|_____\|_____\|__BBBBB\s*' - # r'\| \| \| \| : \| \| TTTTTTTTT\s*' - # r'\| \| \| \|--\+0-\| \| DDDDDD FFFFF T\s*' - # r'\|_____\|_____\|_____\|__:__\|_____\| D D F TTTT T\s*' - # r'\| \| \| \| : \| \|D D F T T\s*' - # r'\| \| \| \|--\+0-\| \|D D FFFF T T\s*' - # r'\|_____\|_____\|_____\|__:__\|_____\|D___ D F T T\s*' - # r'\| \| \| : \| \| \|D D F TTTTT\s*' - # r'\| \| \|--\+0-\| \| \| D D F T T\s*' - # r'\|_____\|_____\|__:__\|_____\|_____\| D F T T\s*' - # r'\| \| \| \| \| \| D T T\s*' - # r'\| \| \| \| \| \| DDDDDD F TTTT\s*' - # r'\|_____\|_____\|_____\|_____\|_____\|______ www\.bigdft\.org' - ) - ), - LegacyParser( - name='parsers/wien2k', code_name='WIEN2k', 
code_homepage='http://www.wien2k.at/', - parser_class_name='wien2kparser.Wien2kParser', - mainfile_contents_re=r'\s*---------\s*:ITE[0-9]+:\s*[0-9]+\.\s*ITERATION\s*---------' - ), - LegacyParser( - name='parsers/band', code_name='BAND', code_homepage='https://www.scm.com/product/band_periodicdft/', - parser_class_name='bandparser.BANDParser', - mainfile_contents_re=r' +\* +Amsterdam Density Functional +\(ADF\)'), - LegacyParser( - name='parsers/gaussian', code_name='Gaussian', code_homepage='http://gaussian.com/', - parser_class_name='gaussianparser.GaussianParser', - mainfile_mime_re=r'.*', - mainfile_contents_re=( - r'\s*Cite this work as:' - r'\s*Gaussian [0-9]+, Revision [A-Za-z0-9\.]*,') - ), - LegacyParser( - name='parsers/quantumespresso', code_name='Quantum Espresso', code_homepage='https://www.quantum-espresso.org/', - parser_class_name='quantumespressoparser.QuantumEspressoParserPWSCF', - mainfile_contents_re=( - r'(Program PWSCF.*starts)|' - r'(Current dimensions of program PWSCF are)') - # r'^(.*\n)*' - # r'\s*Program (\S+)\s+v\.(\S+)(?:\s+\(svn\s+rev\.\s+' - # r'(\d+)\s*\))?\s+starts[^\n]+' - # r'(?:\s*\n?)*This program is part of the open-source Quantum') - ), - LegacyParser( - name='parsers/abinit', code_name='ABINIT', code_homepage='https://www.abinit.org/', - parser_class_name='abinitparser.AbinitParser', - mainfile_contents_re=(r'^\n*\.Version\s*[0-9.]*\s*of ABINIT\s*') - ), - LegacyParser( - name='parsers/orca', code_name='ORCA', code_homepage='https://orcaforum.kofo.mpg.de/', - parser_class_name='orcaparser.OrcaParser', - mainfile_contents_re=( - r'\s+\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\**\s*' - r'\s+\* O R C A \*\s*' - r'\s+\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\**\s*' - r'\s*' - r'\s*--- An Ab Initio, DFT and Semiempirical electronic structure package ---\s*') - ), - LegacyParser( - name='parsers/castep', code_name='CASTEP', code_homepage='http://www.castep.org/', - parser_class_name='castepparser.CastepParser', - mainfile_contents_re=(r'\s\|\s*CCC\s*AA\s*SSS\s*TTTTT\s*EEEEE\s*PPPP\s*\|\s*') - ), - LegacyParser( - name='parsers/dl-poly', code_name='DL_POLY', code_homepage='https://www.scd.stfc.ac.uk/Pages/DL_POLY.aspx', - parser_class_name='dlpolyparser.DlPolyParserWrapper', - mainfile_contents_re=(r'\*\* DL_POLY \*\*') - ), - LegacyParser( - name='parsers/lib-atoms', code_name='libAtoms', code_homepage='https://libatoms.github.io/', - parser_class_name='libatomsparser.LibAtomsParserWrapper', - mainfile_contents_re=(r'\s*', e.g. 'parsers/vasp'. 
''' - -# renamed parsers -parser_dict['parser/broken'] = parser_dict['parsers/broken'] -parser_dict['parser/fleur'] = parser_dict['parsers/fleur'] -parser_dict['parser/molcas'] = parser_dict['parsers/molcas'] -parser_dict['parser/octopus'] = parser_dict['parsers/octopus'] -parser_dict['parser/onetep'] = parser_dict['parsers/onetep'] - -# register code names as possible statistic value to the dft datamodel -code_names = sorted( - set([ - getattr(parser, 'code_name') - for parser in parsers - if parser.domain == 'dft' and getattr(parser, 'code_name', None) is not None and getattr(parser, 'code_name') != 'currupted mainfile']), - key=lambda code_name: code_name.lower()) -datamodel.DFTMetadata.code_name.a_search.statistic_values = code_names + [config.services.unavailable_value, config.services.not_processed_value] +from nomad.parsing.artificial import TemplateParser, GenerateRandomParser, ChaosParser, EmptyParser diff --git a/nomad/parsing/parsers.py b/nomad/parsing/parsers.py new file mode 100644 index 0000000000000000000000000000000000000000..c477e0dcbe9d706f6212a4c418f8b25aa91aefce --- /dev/null +++ b/nomad/parsing/parsers.py @@ -0,0 +1,513 @@ +# Copyright 2018 Markus Scheidgen +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an"AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +import os.path + +from nomad import config, datamodel + +from .parser import MissingParser, BrokenParser, Parser +from .legacy import LegacyParser, VaspOutcarParser +from .artificial import EmptyParser, GenerateRandomParser, TemplateParser, ChaosParser + +from eelsparser import EelsParser +from mpesparser import MPESParser +from aptfimparser import APTFIMParser + +try: + # these packages are not available without parsing extra, which is ok, if the + # parsers are only initialized to load their metainfo definitions + import magic + import gzip + import bz2 + import lzma + + _compressions = { + b'\x1f\x8b\x08': ('gz', gzip.open), + b'\x42\x5a\x68': ('bz2', bz2.open), + b'\xfd\x37\x7a': ('xz', lzma.open) + } + + encoding_magic = magic.Magic(mime_encoding=True) + +except ImportError: + pass + + +def match_parser(mainfile_path: str, strict=True) -> Parser: + ''' + Performs parser matching. This means it take the given mainfile and potentially + opens it with the given callback and tries to identify a parser that can parse + the file. + + This is determined by filename (e.g. *.out), mime type (e.g. text/*, application/xml), + and beginning file contents. + + Arguments: + mainfile_path: Path to the mainfile + strict: Only match strict parsers, e.g. no artificial parsers for missing or empty entries. + + Returns: The parser, or None if no parser could be matched. 
+ ''' + mainfile = os.path.basename(mainfile_path) + if mainfile.startswith('.') or mainfile.startswith('~'): + return None + + with open(mainfile_path, 'rb') as f: + compression, open_compressed = _compressions.get(f.read(3), (None, open)) + + with open_compressed(mainfile_path, 'rb') as cf: # type: ignore + buffer = cf.read(config.parser_matching_size) + + mime_type = magic.from_buffer(buffer, mime=True) + + decoded_buffer = None + encoding = None + try: # Try to open the file as a string for regex matching. + decoded_buffer = buffer.decode('utf-8') + except UnicodeDecodeError: + # This file is either binary or has wrong encoding + encoding = encoding_magic.from_buffer(buffer) + + if config.services.force_raw_file_decoding: + encoding = 'iso-8859-1' + + if encoding in ['iso-8859-1']: + try: + decoded_buffer = buffer.decode(encoding) + except Exception: + pass + + for parser in parsers: + if strict and isinstance(parser, (MissingParser, EmptyParser)): + continue + + if parser.is_mainfile(mainfile_path, mime_type, buffer, decoded_buffer, compression): + # potentially convert the file + if encoding in ['iso-8859-1']: + try: + with open(mainfile_path, 'rb') as binary_file: + content = binary_file.read().decode(encoding) + except Exception: + pass + else: + with open(mainfile_path, 'wt') as text_file: + text_file.write(content) + + # TODO: deal with multiple possible parser specs + return parser + + return None + + +parsers = [ + GenerateRandomParser(), + TemplateParser(), + ChaosParser(), + LegacyParser( + name='parsers/phonopy', code_name='Phonopy', code_homepage='https://phonopy.github.io/phonopy/', + parser_class_name='phonopyparser.PhonopyParserWrapper', + # mainfile_contents_re=r'', # Empty regex since this code calls other DFT codes. + mainfile_name_re=(r'.*/phonopy-FHI-aims-displacement-0*1/control.in$') + ), + LegacyParser( + name='parsers/vasp', code_name='VASP', code_homepage='https://www.vasp.at/', + parser_class_name='vaspparser.VASPRunParser', + mainfile_mime_re=r'(application/.*)|(text/.*)', + mainfile_contents_re=( + r'^\s*<\?xml version="1\.0" encoding="ISO-8859-1"\?>\s*' + r'?\s*' + r'?\s*' + r'?\s*\s*vasp\s*' + r'?'), + supported_compressions=['gz', 'bz2', 'xz'] + ), + VaspOutcarParser( + name='parsers/vasp-outcar', code_name='VASP', code_homepage='https://www.vasp.at/', + parser_class_name='vaspparser.VaspOutcarParser', + mainfile_name_re=r'(.*/)?OUTCAR(\.[^\.]*)?', + mainfile_contents_re=(r'^\svasp\.') + ), + LegacyParser( + name='parsers/exciting', code_name='exciting', code_homepage='http://exciting-code.org/', + parser_class_name='excitingparser.ExcitingParser', + mainfile_name_re=r'^.*.OUT(\.[^/]*)?$', + mainfile_contents_re=(r'EXCITING.*started') + ), + LegacyParser( + name='parsers/fhi-aims', code_name='FHI-aims', code_homepage='https://aimsclub.fhi-berlin.mpg.de/', + parser_class_name='fhiaimsparser.FHIaimsParser', + mainfile_contents_re=( + r'^(.*\n)*' + r'?\s*Invoking FHI-aims \.\.\.' 
+ # r'?\s*Version' + ) + ), + LegacyParser( + name='parsers/cp2k', code_name='CP2K', code_homepage='https://www.cp2k.org/', + parser_class_name='cp2kparser.CP2KParser', + mainfile_contents_re=( + r'\*\*\*\* \*\*\*\* \*\*\*\*\*\* \*\* PROGRAM STARTED AT\s.*\n' + r' \*\*\*\*\* \*\* \*\*\* \*\*\* \*\* PROGRAM STARTED ON\s*.*\n' + r' \*\* \*\*\*\* \*\*\*\*\*\* PROGRAM STARTED BY .*\n' + r' \*\*\*\*\* \*\* \*\* \*\* \*\* PROGRAM PROCESS ID .*\n' + r' \*\*\*\* \*\* \*\*\*\*\*\*\* \*\* PROGRAM STARTED IN .*\n' + ) + ), + LegacyParser( + name='parsers/crystal', code_name='Crystal', code_homepage='https://www.crystal.unito.it/', + parser_class_name='crystalparser.CrystalParser', + mainfile_contents_re=( + r'(CRYSTAL\s*\n\d+ \d+ \d+)|(CRYSTAL will run on \d+ processors)|' + r'(\s*\*\s*CRYSTAL[\d]+\s*\*\s*\*\s*(public|Release) \: [\d\.]+.*\*)|' + r'(Executable:\s*[/_\-a-zA-Z0-9]*MPPcrystal)' + ) + ), + # The main contents regex of CPMD was causing a catostrophic backtracking issue + # when searching through the first 500 bytes of main files. We decided + # to use only a portion of the regex to avoid that issue. + LegacyParser( + name='parsers/cpmd', code_name='CPMD', code_homepage='https://www.lcrc.anl.gov/for-users/software/available-software/cpmd/', + parser_class_name='cpmdparser.CPMDParser', + mainfile_contents_re=( + # r'\s+\*\*\*\*\*\* \*\*\*\*\*\* \*\*\*\* \*\*\*\* \*\*\*\*\*\*\s*' + # r'\s+\*\*\*\*\*\*\* \*\*\*\*\*\*\* \*\*\*\*\*\*\*\*\*\* \*\*\*\*\*\*\*\s+' + r'\*\*\* \*\* \*\*\* \*\* \*\*\*\* \*\* \*\* \*\*\*' + # r'\s+\*\* \*\* \*\*\* \*\* \*\* \*\* \*\* \*\*\s+' + # r'\s+\*\* \*\*\*\*\*\*\* \*\* \*\* \*\* \*\*\s+' + # r'\s+\*\*\* \*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\s+' + # r'\s+\*\*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\*\*\*\*\s+' + # r'\s+\*\*\*\*\*\* \*\* \*\* \*\* \*\*\*\*\*\*\s+' + ) + ), + LegacyParser( + name='parsers/nwchem', code_name='NWChem', code_homepage='http://www.nwchem-sw.org/', + parser_class_name='nwchemparser.NWChemParser', + mainfile_contents_re=( + r'Northwest Computational Chemistry Package \(NWChem\) (\d+\.)+\d+' + ) + ), + LegacyParser( + name='parsers/bigdft', code_name='BigDFT', code_homepage='http://bigdft.org/', + parser_class_name='bigdftparser.BigDFTParser', + mainfile_contents_re=( + # r'__________________________________ A fast and precise DFT wavelet code\s*' + # r'\| \| \| \| \| \|\s*' + # r'\| \| \| \| \| \| BBBB i gggggg\s*' + # r'\|_____\|_____\|_____\|_____\|_____\| B B g\s*' + # r'\| \| : \| : \| \| \| B B i g\s*' + # r'\| \|-0\+--\|-0\+--\| \| \| B B i g g\s*' + r'\|_____\|__:__\|__:__\|_____\|_____\|___ BBBBB i g g\s*' + # r'\| : \| \| \| : \| \| B B i g g\s*' + # r'\|--\+0-\| \| \|-0\+--\| \| B B iiii g g\s*' + # r'\|__:__\|_____\|_____\|__:__\|_____\| B B i g g\s*' + # r'\| \| : \| : \| \| \| B BBBB i g g\s*' + # r'\| \|-0\+--\|-0\+--\| \| \| B iiiii gggggg\s*' + # r'\|_____\|__:__\|__:__\|_____\|_____\|__BBBBB\s*' + # r'\| \| \| \| : \| \| TTTTTTTTT\s*' + # r'\| \| \| \|--\+0-\| \| DDDDDD FFFFF T\s*' + # r'\|_____\|_____\|_____\|__:__\|_____\| D D F TTTT T\s*' + # r'\| \| \| \| : \| \|D D F T T\s*' + # r'\| \| \| \|--\+0-\| \|D D FFFF T T\s*' + # r'\|_____\|_____\|_____\|__:__\|_____\|D___ D F T T\s*' + # r'\| \| \| : \| \| \|D D F TTTTT\s*' + # r'\| \| \|--\+0-\| \| \| D D F T T\s*' + # r'\|_____\|_____\|__:__\|_____\|_____\| D F T T\s*' + # r'\| \| \| \| \| \| D T T\s*' + # r'\| \| \| \| \| \| DDDDDD F TTTT\s*' + # r'\|_____\|_____\|_____\|_____\|_____\|______ www\.bigdft\.org' + ) + ), + LegacyParser( + name='parsers/wien2k', code_name='WIEN2k', 
code_homepage='http://www.wien2k.at/', + parser_class_name='wien2kparser.Wien2kParser', + mainfile_contents_re=r'\s*---------\s*:ITE[0-9]+:\s*[0-9]+\.\s*ITERATION\s*---------' + ), + LegacyParser( + name='parsers/band', code_name='BAND', code_homepage='https://www.scm.com/product/band_periodicdft/', + parser_class_name='bandparser.BANDParser', + mainfile_contents_re=r' +\* +Amsterdam Density Functional +\(ADF\)'), + LegacyParser( + name='parsers/gaussian', code_name='Gaussian', code_homepage='http://gaussian.com/', + parser_class_name='gaussianparser.GaussianParser', + mainfile_mime_re=r'.*', + mainfile_contents_re=( + r'\s*Cite this work as:' + r'\s*Gaussian [0-9]+, Revision [A-Za-z0-9\.]*,') + ), + LegacyParser( + name='parsers/quantumespresso', code_name='Quantum Espresso', code_homepage='https://www.quantum-espresso.org/', + parser_class_name='quantumespressoparser.QuantumEspressoParserPWSCF', + mainfile_contents_re=( + r'(Program PWSCF.*starts)|' + r'(Current dimensions of program PWSCF are)') + # r'^(.*\n)*' + # r'\s*Program (\S+)\s+v\.(\S+)(?:\s+\(svn\s+rev\.\s+' + # r'(\d+)\s*\))?\s+starts[^\n]+' + # r'(?:\s*\n?)*This program is part of the open-source Quantum') + ), + LegacyParser( + name='parsers/abinit', code_name='ABINIT', code_homepage='https://www.abinit.org/', + parser_class_name='abinitparser.AbinitParser', + mainfile_contents_re=(r'^\n*\.Version\s*[0-9.]*\s*of ABINIT\s*') + ), + LegacyParser( + name='parsers/orca', code_name='ORCA', code_homepage='https://orcaforum.kofo.mpg.de/', + parser_class_name='orcaparser.OrcaParser', + mainfile_contents_re=( + r'\s+\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\**\s*' + r'\s+\* O R C A \*\s*' + r'\s+\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\**\s*' + r'\s*' + r'\s*--- An Ab Initio, DFT and Semiempirical electronic structure package ---\s*') + ), + LegacyParser( + name='parsers/castep', code_name='CASTEP', code_homepage='http://www.castep.org/', + parser_class_name='castepparser.CastepParser', + mainfile_contents_re=(r'\s\|\s*CCC\s*AA\s*SSS\s*TTTTT\s*EEEEE\s*PPPP\s*\|\s*') + ), + LegacyParser( + name='parsers/dl-poly', code_name='DL_POLY', code_homepage='https://www.scd.stfc.ac.uk/Pages/DL_POLY.aspx', + parser_class_name='dlpolyparser.DlPolyParserWrapper', + mainfile_contents_re=(r'\*\* DL_POLY \*\*') + ), + LegacyParser( + name='parsers/lib-atoms', code_name='libAtoms', code_homepage='https://libatoms.github.io/', + parser_class_name='libatomsparser.LibAtomsParserWrapper', + mainfile_contents_re=(r'\s*', e.g. 'parsers/vasp'. 
''' + +# renamed parsers +parser_dict['parser/broken'] = parser_dict['parsers/broken'] +parser_dict['parser/fleur'] = parser_dict['parsers/fleur'] +parser_dict['parser/molcas'] = parser_dict['parsers/molcas'] +parser_dict['parser/octopus'] = parser_dict['parsers/octopus'] +parser_dict['parser/onetep'] = parser_dict['parsers/onetep'] + +# register code names as possible statistic value to the dft datamodel +code_names = sorted( + set([ + getattr(parser, 'code_name') + for parser in parsers + if parser.domain == 'dft' and getattr(parser, 'code_name', None) is not None and getattr(parser, 'code_name') != 'currupted mainfile']), + key=lambda code_name: code_name.lower()) +datamodel.DFTMetadata.code_name.a_search.statistic_values = code_names + [config.services.unavailable_value, config.services.not_processed_value] diff --git a/nomad/processing/data.py b/nomad/processing/data.py index 2e0785cc36112d7b7f73964fdf18bf61112d9928..bbd5a2b61bd5d1736abd17e29839a52ac4cf3041 100644 --- a/nomad/processing/data.py +++ b/nomad/processing/data.py @@ -38,7 +38,8 @@ from structlog.processors import StackInfoRenderer, format_exc_info, TimeStamper from nomad import utils, config, infrastructure, search, datamodel from nomad.files import PathObject, UploadFiles, ExtractError, ArchiveBasedStagingUploadFiles, PublicUploadFiles, StagingUploadFiles from nomad.processing.base import Proc, process, task, PENDING, SUCCESS, FAILURE -from nomad.parsing import parser_dict, match_parser, Backend +from nomad.parsing import Backend +from nomad.parsing.parsers import parser_dict, match_parser from nomad.normalizing import normalizers from nomad.datamodel import EntryArchive from nomad.archive import query_archive @@ -456,7 +457,10 @@ class Calc(Proc): self._entry_metadata.dft.update_group_hash() except Exception as e: logger.error("Could not retrieve method information for phonon calculation.", exception=e) + if self._entry_metadata.encyclopedia is None: + self._entry_metadata.encyclopedia = EncyclopediaMetadata() self._entry_metadata.encyclopedia.status = EncyclopediaMetadata.status.type.failure + finally: # persist the calc metadata with utils.timer(logger, 'saved calc metadata', step='metadata'): @@ -1036,8 +1040,8 @@ class Upload(Proc): modified_upload = self._get_collection().find_one_and_update( {'_id': self.upload_id, 'joined': {'$ne': True}}, {'$set': {'joined': True}}) - if modified_upload is not None: - self.get_logger().debug('join') + if modified_upload is None or modified_upload['joined'] is False: + self.get_logger().info('join') # Before cleaning up, run an additional normalizer on phonon # calculations. 
TODO: This should be replaced by a more diff --git a/ops/containers/keycloak/material_theme/account/theme.properties b/ops/containers/keycloak/material_theme/account/theme.properties index 8293dd1e46f259524ac7b26112b5864f0012590b..6ec78a16c811f1d05f45279935be964b4adc287d 100644 --- a/ops/containers/keycloak/material_theme/account/theme.properties +++ b/ops/containers/keycloak/material_theme/account/theme.properties @@ -1,5 +1,5 @@ parent=base styles=css/material-components-web.min.css css/bootstrap-material-design-alerts.css css/material-keycloak-theme.css scripts=js/polyfill/nodelist-foreach.js js/material-components-web.min.js js/material-keycloak-theme.js -kcLogoLink=https://nomad-coe.eu/ +kcLogoLink=https://nomad-lab.eu/ diff --git a/ops/containers/keycloak/material_theme/login/theme.properties b/ops/containers/keycloak/material_theme/login/theme.properties index 8293dd1e46f259524ac7b26112b5864f0012590b..6ec78a16c811f1d05f45279935be964b4adc287d 100644 --- a/ops/containers/keycloak/material_theme/login/theme.properties +++ b/ops/containers/keycloak/material_theme/login/theme.properties @@ -1,5 +1,5 @@ parent=base styles=css/material-components-web.min.css css/bootstrap-material-design-alerts.css css/material-keycloak-theme.css scripts=js/polyfill/nodelist-foreach.js js/material-components-web.min.js js/material-keycloak-theme.js -kcLogoLink=https://nomad-coe.eu/ +kcLogoLink=https://nomad-lab.eu/ diff --git a/ops/docker-compose/nomad-oasis/README.md b/ops/docker-compose/nomad-oasis/README.md index cd98b90434041fe7841edbc49c8ba26729ca3c8f..2a375787c81bee119ba78a42077b922e464064a1 100644 --- a/ops/docker-compose/nomad-oasis/README.md +++ b/ops/docker-compose/nomad-oasis/README.md @@ -203,7 +203,7 @@ The GUI also has a config file, called `env.js` with a similar function than `no ```js window.nomadEnv = { 'appBase': '/nomad-oasis/', - 'keycloakBase': 'https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + 'keycloakBase': 'https://nomad-lab.eu/fairdi/keycloak/auth/', 'keycloakRealm': 'fairdi_nomad_prod', 'keycloakClientId': 'nomad_public', 'debug': false, @@ -352,7 +352,7 @@ Will will probably provide functionality in the API of the central NOMAD to uplo ### How to maintain an Oasis installation? #### How to install a NOMAD Oasis? -Follow our guide: https://repository.nomad-coe.eu/app/docs/ops.html#operating-a-nomad-oasis +Follow our guide: https://nomad-lab.eu/prod/rae/docs/ops.html#operating-a-nomad-oasis #### How do version numbers work? There are still a lot of thing in NOMAD that are subject to change. Currently, changes in the minor version number (0.x.0) designate major changes that require data migration. Changes in the patch version number (0.7.x) just contain minor changes and fixes and do not require data migration. Once we reach 1.0.0, NOMAD will use the regular semantic versioning conventions. 
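The Oasis README above points the GUI's `env.js` at the central NOMAD Keycloak (`keycloakBase`, `keycloakRealm`). As a quick sanity check after editing these values, the realm's standard OpenID discovery document can be fetched. The following is only a minimal sketch and not part of the NOMAD codebase: it assumes the `requests` library and the stock Keycloak discovery endpoint, and the helper name is made up.

```python
# Minimal sketch (not part of NOMAD): verify that the Keycloak settings from an
# Oasis' env.js point at a reachable realm via Keycloak's OpenID discovery endpoint.
import requests

KEYCLOAK_BASE = 'https://nomad-lab.eu/fairdi/keycloak/auth/'  # 'keycloakBase' from env.js
KEYCLOAK_REALM = 'fairdi_nomad_prod'                          # 'keycloakRealm' from env.js


def keycloak_realm_reachable(base: str, realm: str) -> bool:
    '''Returns True if the realm's OpenID discovery document can be fetched.'''
    url = f'{base.rstrip("/")}/realms/{realm}/.well-known/openid-configuration'
    response = requests.get(url, timeout=10)
    # a healthy realm answers with a JSON document that includes its token endpoint
    return response.status_code == 200 and 'token_endpoint' in response.json()


if __name__ == '__main__':
    print('keycloak reachable:', keycloak_realm_reachable(KEYCLOAK_BASE, KEYCLOAK_REALM))
```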
diff --git a/ops/docker-compose/nomad-oasis/env.js b/ops/docker-compose/nomad-oasis/env.js index c3b12dfa668e4f08ededf5e5bac372ec9b2253b4..d4fd251fd81d3535a9e280c7fbfd6bd8645b41f1 100644 --- a/ops/docker-compose/nomad-oasis/env.js +++ b/ops/docker-compose/nomad-oasis/env.js @@ -1,6 +1,6 @@ window.nomadEnv = { 'appBase': '/nomad-oasis/', - 'keycloakBase': 'https://repository.nomad-coe.eu/fairdi/keycloak/auth/', + 'keycloakBase': 'https://nomad-lab.eu/fairdi/keycloak/auth/', 'keycloakRealm': 'fairdi_nomad_prod', 'keycloakClientId': 'nomad_public', 'debug': false, diff --git a/ops/elasticsearch_settings.http b/ops/elasticsearch_settings.http new file mode 100644 index 0000000000000000000000000000000000000000..6f7b020337b39e64b7073c8746d4fdab19a47d38 --- /dev/null +++ b/ops/elasticsearch_settings.http @@ -0,0 +1,8 @@ +### Make Elasticsearch indices writable again, after storage ran low and the watermark check # automatically set all indices to read_only_allow_delete=true +PUT http://localhost:19202/_all/_settings HTTP/1.1 +Content-Type: application/json + +{ + "index.blocks.read_only_allow_delete": null +} \ No newline at end of file diff --git a/ops/helm/nomad/Chart.yaml b/ops/helm/nomad/Chart.yaml index c044aba4224c5960c882e9d95fc144d70656f351..2349446cbdc8a2959c7a1e1560091511cdc5bf98 100644 --- a/ops/helm/nomad/Chart.yaml +++ b/ops/helm/nomad/Chart.yaml @@ -1,5 +1,5 @@ apiVersion: v1 -appVersion: "0.8.1" +appVersion: "0.8.3" description: A Helm chart for Kubernetes that only runs nomad services and uses externally hosted databases. name: nomad -version: 0.8.2 +version: 0.8.3 diff --git a/ops/helm/nomad/ci-dev-values.yaml b/ops/helm/nomad/ci-dev-values.yaml index 69654013ef508aa251b0bf70295ec3f50469632a..24e28cacf63fbb266f8f438e87523eceb43fc9d4 100644 --- a/ops/helm/nomad/ci-dev-values.yaml +++ b/ops/helm/nomad/ci-dev-values.yaml @@ -27,7 +27,7 @@ logstash: dbname: nomad_dev_v0_8 keycloak: - serverUrl: "https://repository.nomad-coe.eu/fairdi/keycloak/auth/" + serverUrl: "https://nomad-lab.eu/fairdi/keycloak/auth/" passwordSecret: 'nomad-keycloak-password' realmName: 'fairdi_nomad_prod' clientId: 'nomad_public' diff --git a/ops/helm/nomad/values.yaml b/ops/helm/nomad/values.yaml index f6909d6720145e957776faa14339466225aa571b..56bea48ce5e92df6e6cd8dd9a877b39814416187 100644 --- a/ops/helm/nomad/values.yaml +++ b/ops/helm/nomad/values.yaml @@ -1,9 +1,9 @@ ## Default values for nomad@FAIRDI version: - label: "0.8.2" + label: "0.8.3" isBeta: false usesBetaData: false - officialUrl: "https://repository.nomad-coe.eu/app/gui" + officialUrl: "https://nomad-lab.eu/prod/rae/gui" ## Everything concerning the container images to be used image: @@ -73,7 +73,7 @@ gui: ## This variable is used in the GUI to show or hide additional information debug: false ## URL for matomo(piwik) user tracking - matomoUrl: 'https://repository.nomad-coe.eu/fairdi/stat' + matomoUrl: 'https://nomad-lab.eu/prod/stat' ## site id for matomo(piwik) user tracking matomoSiteId: 1 ## send matomo(piwik) user tracking data @@ -140,8 +140,8 @@ client: username: admin keycloak: - serverExternalUrl: "https://repository.nomad-coe.eu/fairdi/keycloak/auth/" - serverUrl: "https://repository.nomad-coe.eu/fairdi/keycloak/auth/" + serverExternalUrl: "https://nomad-lab.eu/fairdi/keycloak/auth/" + serverUrl: "https://nomad-lab.eu/keycloak/auth/" realmName: "fairdi_nomad_test" username: "admin" clientId: "nomad_public" diff --git a/tests/parser_measurement.py b/tests/parser_measurement.py index 
e0cec2adac49c8dcca22cb1125ec0d590effa8c8..bc5905bb1cb2a39035889f2a24c21df7abd1c8a1 100644 --- a/tests/parser_measurement.py +++ b/tests/parser_measurement.py @@ -3,7 +3,7 @@ if __name__ == '__main__': import logging import time from nomad import config, utils - from nomad.parsing import parser_dict + from nomad.parsing.parsers import parser_dict from nomad.cli.parse import normalize_all from nomad.metainfo.legacy import LegacyMetainfoEnvironment from nomad.parsing.legacy import Backend diff --git a/tests/test_datamodel.py b/tests/test_datamodel.py index 54d46db39997c2a4ead0b5a2b9bb81e94dde7562..d60de974fe57b47cadd7bbc8fa8b5c35fda86e25 100644 --- a/tests/test_datamodel.py +++ b/tests/test_datamodel.py @@ -22,7 +22,8 @@ import datetime from ase.data import chemical_symbols from ase.spacegroup import Spacegroup -from nomad import datamodel, parsing, utils, files +from nomad import datamodel, utils, files +from nomad.parsing.parsers import parser_dict number_of = 20 @@ -37,7 +38,7 @@ systems = ['atom', 'molecule/cluster', '2D/surface', 'bulk'] comments = [gen.sentence() for _ in range(0, number_of)] references = [(i + 1, gen.url()) for i in range(0, number_of)] datasets = [(i + 1, gen.slug()) for i in range(0, number_of)] -codes = list(set([parser.code_name for parser in parsing.parser_dict.values() if hasattr(parser, 'code_name')])) # type: ignore +codes = list(set([parser.code_name for parser in parser_dict.values() if hasattr(parser, 'code_name')])) # type: ignore filepaths = ['/'.join(gen.url().split('/')[3:]) for _ in range(0, number_of)] low_numbers_for_atoms = [1, 1, 2, 2, 2, 2, 2, 3, 3, 4] diff --git a/tests/test_parsing.py b/tests/test_parsing.py index 6f1271a647f72055729db7ae4a208e54da11d23d..d0b88a96b0df3cafd15b01f596c382c5b3c1244d 100644 --- a/tests/test_parsing.py +++ b/tests/test_parsing.py @@ -20,7 +20,8 @@ import os from shutil import copyfile from nomad import utils, files, datamodel -from nomad.parsing import parser_dict, match_parser, BrokenParser, BadContextUri, Backend +from nomad.parsing import BrokenParser, BadContextUri, Backend +from nomad.parsing.parsers import parser_dict, match_parser from nomad.app import dump_json from nomad.metainfo import MSection
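Taken together, the updated tests show the intent of this change: `parser_dict` and `match_parser` now live in `nomad.parsing.parsers` rather than `nomad.parsing`. A minimal usage sketch under that assumption follows; it requires a NOMAD development install with the parsing extra, and the mainfile path is a hypothetical placeholder.

```python
# Minimal sketch of the relocated parser registry (assumes a NOMAD dev install
# with the parsing extra installed; the mainfile path below is hypothetical).
from nomad.parsing.parsers import parser_dict, match_parser

# look up a parser by name, as the updated tests do
vasp_parser = parser_dict['parsers/vasp']
print(vasp_parser.name, getattr(vasp_parser, 'code_name', None))

# let NOMAD identify a parser for a given mainfile; returns None if nothing matches
mainfile_path = '/path/to/some/vasprun.xml'  # hypothetical local file
parser = match_parser(mainfile_path, strict=True)
if parser is None:
    print('no parser matched')
else:
    print('matched', parser.name)
```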