# nomad-FAIR issues
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues

# Improvements to text_parser (#421)
Markus Scheidgen · updated 2021-06-17 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/421

The usual stuff, mandatory:
- [x] documentation (including simple example)
- [x] proper tests (you can turn `test_parsing.py` into a directory `parsing` with the old `test_parsing.py` and a new `test_text_parser.py`)
- [x] add type annotations to all public methods
- [ ] catch all matches to build a map of what has been extracted from the file and provide parser "telemetry"/statistics. In the old framework, we had an ANSI colored version of the parsed file. This was super helpful for debugging. This is also something we could show in the GUI, so you can see what was covered.
More specific suggestions:
- [x] it should be possible to initialise `Quantity` from a metainfo quantity and get name, type, shape, etc. from there
- [x] there should be automatic conversion, via pint, from the unit in the text file to the unit attached to the metainfo Quantity; the unit conversion method should not be called `to_si`, but `to_metainfo_unit` or something similar.
- [x] since Quantity is a class, should `str_operation` be provided via inheritance? Or at least there should be the option to overwrite it in a subclass?
- [x] I do not like the idea of storing the values in Quantity. That introduces very tight coupling between the two classes, makes it harder to clear out values between parser runs, and breaks down if someone wants to use the same quantity in different parsers. The values should be stored in a separate mapping in UnstructuredTextFileParser; Quantity should only provide the pattern and convert functions.
- [x] If you do this dynamic parsing thing, where you only parse if someone asks for a quantity, why not go all the way and only parse for that quantity?
- [x] You could also add `__getattr__`/`__getattribute__` to allow more convenient `parser.quantity` style access.
- [ ] You could have an additional function that allows passing a section. This function could automatically try parsing all quantities in that section's definition and assign values accordingly.
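The lazy, per-quantity parsing and the attribute-style access suggested above could be sketched roughly like this (all class and method names are illustrative, not the actual text_parser API):

```python
import re

class Quantity:
    """Only holds the pattern and conversion; values live in the parser."""
    def __init__(self, name, pattern, str_operation=str):
        self.name = name
        self.re_pattern = re.compile(pattern)
        self.str_operation = str_operation

class UnstructuredTextFileParser:
    def __init__(self, mainfile, quantities):
        self._mainfile = mainfile
        self._quantities = {q.name: q for q in quantities}
        self._results = {}  # values are stored here, not in Quantity

    def __getattr__(self, name):
        # parse lazily, and only the quantity that was asked for
        if name not in self._quantities:
            raise AttributeError(name)
        if name not in self._results:
            quantity = self._quantities[name]
            with open(self._mainfile) as f:
                match = quantity.re_pattern.search(f.read())
            self._results[name] = (
                quantity.str_operation(match.group(1)) if match else None)
        return self._results[name]
```

With this, `parser.energy` triggers a search only for the `energy` pattern, and clearing `_results` resets the parser without touching the Quantity objects.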
Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

# A reimplementation of the upload API with fastapi (#503)
Markus Scheidgen · updated 2021-06-17 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/503

This is related to #408 and https://github.com/nomad-coe/nomad/issues/1
| |endpoint|description|in|out|
|-|-|-|-|-|
|GET |uploads|retrieve list of uploads|owner, modified since, pagination, processing filters, required|upload metadata (JSON)|
|GET |uploads/`id`|retrieve upload metadata|required|upload metadata (JSON)|
|GET |uploads/`id`/entries|retrieve list of upload entries|pagination, modified since, processing filters, required|entry processing metadata (JSON)|
|GET |uploads/`id`/entries/`id`|retrieve entry|required|entry processing metadata (JSON)|
|GET |uploads/`id`/mirror|stream upload and entry mongodb records|pagination, modified since|new line separated JSON docs|
|POST |upload|create upload|name, auth|upload metadata (JSON)|
|POST |upload/`id`/raw/upload|upload and overwrite raw files|file archive|human readable acknowledgement or JSON depending on header|
|GET |upload/`id`/raw/download/`path`|stream recursive contents as .zip|-|* or .zip depending on path|
|POST |upload/`id`/raw/`path`|upload and overwrite a specific file, dirs are created|a file|human readable acknowledgement or JSON depending on header|
|GET |upload/`id`/raw/`path`|retrieve directory contents or file||(JSON or HTML depending on header) or file depending on path (think HTTP file-server)|
|POST |upload/`id`/action|trigger an action|reprocess, publish|upload metadata (JSON)|
|POST |upload/`id`|edit upload metadata|name|upload metadata (JSON)|
|DELETE|upload/`id`|delete||upload metadata (JSON)|
- implement this in `nomad.app.v1.routers.uploads`
- mimic/use the models and principles in `nomad.app.v1.routers.entries`
- the major differences with `nomad.app.v1.routers.entries` are: it's about the processing data (`nomad.processing.*`), not the entries metadata (`nomad.datamodel.*`); respectively, it's about using mongo and not elasticsearch.
- start "duplicating" the functionality of the existing API `nomad.app.flask.api.upload.py`
- after this `mirror` should get priority (data synchronization with NOMAD Oasis)
- returning un-modeled JSON-fied version of the mongoengine objects (`nomad.processing.*`) should be enough in the beginning, only use modeling for the outer shell (pagination, etc.)
- apply excessive testing: one test method per endpoint, lots of parametrisation; reuse example data from the entries tests

Assignee: David Sikter

# Unify `entry/raw` and `upload/raw` v1 API endpoints (#545)
Markus Scheidgen · updated 2021-06-08 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/545

The `entries/id/raw/*` endpoints should work like the `uploads/id/raw/path` endpoint. The only difference is that entries need a search first to determine access rights; for uploads this is determined by published/owner.

Assignee: David Sikter

# Unify download stream generator (#543)
David Sikter · updated 2021-05-19 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/543

A single, unified generator for download streams should be possible, with roughly the following input arguments:
- a generator returning pairs (upload_id, path)
- files params (compress flag and pattern for filtering by filename)
- include_subdirs (recursive include subdir content)
- single_raw_file (if we should just stream one file, without encapsulating it in a zip bundle)
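A minimal sketch of such a generator using only the standard library (the `open_file` callback and all names are assumptions, not the actual implementation):

```python
import io
import zipfile
from typing import Callable, Iterable, Iterator, Tuple

class _ChunkCollector(io.RawIOBase):
    """File-like object that hands written chunks to the response stream."""
    def __init__(self):
        self.chunks = []
    def writable(self):
        return True
    def write(self, b):
        self.chunks.append(bytes(b))
        return len(b)
    def pop(self) -> bytes:
        chunks, self.chunks = self.chunks, []
        return b''.join(chunks)

def download_stream(
        files: Iterable[Tuple[str, str]],       # pairs (upload_id, path)
        open_file: Callable,                    # (upload_id, path) -> file object
        compress: bool = True) -> Iterator[bytes]:
    collector = _ChunkCollector()
    mode = zipfile.ZIP_DEFLATED if compress else zipfile.ZIP_STORED
    # zipfile supports unseekable output streams via data descriptors
    with zipfile.ZipFile(collector, 'w', mode) as zf:
        for upload_id, path in files:
            with open_file(upload_id, path) as f, \
                    zf.open(f'{upload_id}/{path}', 'w') as dest:
                while chunk := f.read(64 * 1024):
                    dest.write(chunk)
            yield collector.pop()
    yield collector.pop()  # central directory written on close
```

The `single_raw_file` case would simply bypass the ZipFile and yield raw chunks from one file.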
The generator also needs to handle caching and orderly closing.

Assignee: David Sikter

# A fastapi re-implementation of the "encyclopedia" API (#454)
Markus Scheidgen · updated 2021-05-06 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/454

You could branch off the `fastapi` branch and start on an "encyclopedia" API. I would suggest adding `materials.py` to `nomad/app_fastapi/routers/`.
We should do a few renames where applicable:
- encyclopedia -> material(s)
- calculation(s) -> entr(y|ies)
A few guidelines:
- not just GUI, but user friendly (think curl and browser, not python)
- GET for everything, even if less powerful than a potentially necessary POST version (i.e. something for users, something for GUI)
- we GET and POST resources in HTTP, e.g. use POST materials/query and not POST materials (as we do not post a material, but a query for materials)
- nothing that requires substantial URL encoding, e.g. no JSON in query params or arrays in path.
- full documentation
- the responses echo the normalized request + data
- for `entries.py` I designed a few models around common aspects like pagination or required properties. Those should be reused (and refactored if necessary)
Here are a few operation suggestions:
- `GET materials/`
- `POST materials/query`; The body could be designed to also allow to retrieve entries from multiple material ids and potentially multiple given entry ids, multiple specified properties, etc.
- `GET materials/<id>/entries`
- `POST materials/<id>/entries/query`; The body could be designed to also allow to retrieve data for multiple entry ids, we could also skip this and force the use of `POST materials/query` for any complex request.
- `GET materials/<id>/entries/<id>`
- `POST materials/<id>/entries/<id>/query`; Not sure if necessary, as required properties usually go well in query params.
Here are a few examples representing the most minimalistic operation set that should suffice:
- `GET materials?owner=public&q=atoms__H&q=atoms__O&include=material_name&include=material_id&include=formula` (like `GET entries`)
- `GET materials/<id>/entries?include=entry_id&include=mainfile`
- `GET materials/<id>/entries/<id>?include=electronic_dos&include=electronic_band_structure`
- `POST materials/query`; The body covers all more specialised aspects, e.g. aggregation, statistics, etc. (similar to `POST entries/query`)
The `POST materials/query` body could look like this:
```json
{
    "query": { ... },  # or
    "ids": [ ... ],
    "pagination": { ... },
    "required": { ... },
    "entries": {
        "ids": [ ... ],  # or
        "query": { ... },
        "pagination": { ... },
        "required": { ... }
    }
}
```
And it could return materials with aggregated entries:
```json
{
...
"data": [
{
"material_id": ...,
...
"entries": [
{
"calc_id": ...,
...
}
]
}
]
}
```
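The request body above could be modelled with pydantic roughly like this (field types are illustrative; `query`, `pagination`, and `required` would ideally be the shared models from `entries.py`):

```python
from typing import Any, Dict, List, Optional
from pydantic import BaseModel

class EntriesBody(BaseModel):
    """The nested 'entries' part of the materials query body."""
    ids: Optional[List[str]] = None          # or a query
    query: Optional[Dict[str, Any]] = None
    pagination: Optional[Dict[str, Any]] = None
    required: Optional[Dict[str, Any]] = None

class MaterialsQueryBody(BaseModel):
    """Body of POST materials/query."""
    query: Optional[Dict[str, Any]] = None   # or a list of material ids
    ids: Optional[List[str]] = None
    pagination: Optional[Dict[str, Any]] = None
    required: Optional[Dict[str, Any]] = None
    entries: Optional[EntriesBody] = None
```

fastapi would validate and document this body automatically, and the nested `entries` model keeps the per-material entry retrieval options in one place.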
The entries in `GET /entries` and the entries in `GET /materials/<id>/entries` should be the same. Currently the repo entries and encyclopedia entries are different. Ideally both operations offer the exact same interface; the only difference being that the material one adds another search criterion.

# Rewrite/refactor of our API with FastAPI (#408)
Markus Scheidgen · updated 2021-05-06 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/408
## Motivation
### Flask + REST Plus is bad
- outdated and not really maintained
- cumbersome, restrictive REST Plus specific request/response models
- argparser (Flask) and models (REST Plus) are unrelated
- no manipulation of generated OpenAPI spec
- optimade Python tools are not based on Flask REST Plus
### Fast API seems to solve a lot of problems
- models based on type annotations (pydantic)
- object oriented models
- query parameter parsing from function parameter annotations
- based on more modern uvicorn; prod==dev server
### our API is messy
- lots of inconsistencies between repo/archive/raw/... related to queries, pagination, etc.
- lots of duplicated copy-and-paste code pieces
- lots of functionality/parameter crammed into GET requests, e.g. `/repo/`
- *grown* tests
## Operations
Before we look at API operations (Fast API and Open API call the basic API building blocks *operations*),
we should look at our datamodel, its core *entities*, and the principal *views* on data.
### NOMAD entities
- (info)
- users
- uploads
- entries
- datasets
- materials
- (metainfo)
#### views
- metadata (repo)
- archive
- raw
- encyclopedia, currently a mix of metadata and archive
- processing/upload (should be just part of metadata)
In contrast to the old API, we structure the API based on entities. The *metadata* view is the default, and there is access to the other views via subpaths `raw`, `archive`.
#### models
Many operations use the same input and output structures over and over again:
- Query
- Pagination, includes response size, order, aggregation
- (Metadata|Archive)Required, a list or structure of things to include in return
- Statistics, additional histogram information on metadata results
- FileFilter, for file downloads
- Models for our entities: User, UserMetadata, Dataset, Entry, Upload
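For instance, the `(Metadata|Archive)Required` idea boils down to something like this (a simplified sketch; the real model would also have to handle nested keys):

```python
from typing import Any, Dict, List, Optional

def apply_required(
        data: Dict[str, Any],
        include: Optional[List[str]] = None,
        exclude: Optional[List[str]] = None) -> Dict[str, Any]:
    """Keep only the requested top-level keys of a result document."""
    result = {k: v for k, v in data.items() if include is None or k in include}
    for key in exclude or []:
        result.pop(key, None)
    return result
```

Defining this once as a shared model means every entity endpoint can offer the same `required` semantics.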
#### operations
|http |path |input |output |
|- |- |- |- |
|GET |`/entries` |SimpleQuery, Pagination, MetadataRequired |Metadata* |
|POST |`/entries/edit` |Query, UserMetadata, EditActions |Edit or EditActions |
|POST |`/entries/query` |Query, Statistics, Pagination, MetadataRequired, |Metadata*, Stat. |
|GET |`/entries/<id>` |MetadataRequired |Metadata |
|GET |`/entries/raw` |SimpleQuery, FileFilter |.zip+manifest |
|POST |`/entries/raw` |Query, FileFilter |.zip+manifest |
|GET |`/entries/<id>/raw` |FileFilter |.zip+manifest |
|GET |`/entries/<id>/raw/<path>` | |file |
|GET |`/entries/archive` |SimpleQuery, Pagination |Archive*|.zip+manifest |
|POST |`/entries/archive` |Query, ArchiveRequired, Pagination |Archive*|.zip+manifest |
|GET |`/entries/<id>/archive` |ArchiveRequired |Archive*|.json |
|GET |`/entries/<id>/archive/<path>` | |Archive*|json-value |
|GET |`/uploads` |SimpleQuery, Pagination, MetadataRequired |(Upload, Metadata*)* |
|POST |`/uploads/query` |Query, Statistics, Pagination, MetadataRequired |(Upload, Metadata*)* |
|PUT |`/uploads` |file |Upload |
|GET |`/uploads/<id>` |Pagination, MetadataRequired |Upload, Metadata* |
|POST |`/uploads/<id>` |Upload+Actions |Upload |
|GET |`/uploads/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Upload, Metadata* |
|DELETE |`/uploads/<id>` | |Upload |
|GET |`/uploads/<id>/raw` |FileFilter |.zip+manifest |
|GET |`/uploads/<id>/<glob>` |FileFilter |.zip+manifest |
|GET |`/uploads/<id>/<path>` | |file |
|GET |`/datasets` |SimpleQuery, Pagination, MetadataRequired |(Dataset, Metadata*)* |
|POST |`/datasets/query` |Query, Statistics, Pagination, MetadataRequired |(Dataset, Metadata*)* |
|PUT |`/datasets` |Dataset |Dataset |
|GET |`/datasets/<id>` |SimpleQuery, Pagination, MetadataRequired |Dataset, Metadata* |
|GET |`/datasets/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Dataset, Metadata* |
|POST |`/datasets/<id>` |Dataset+Actions |Dataset |
|DELETE |`/datasets/<id>` | |Dataset |
|GET |`/material` |SimpleQuery, Pagination, MetadataRequired |(Material, Metadata*)* |
|POST |`/material/query` |Query, Statistics, Pagination, MetadataRequired |(Material, Metadata*)* |
|GET |`/material/<id>` |SimpleQuery, Pagination, MetadataRequired |Material, Metadata* |
|GET |`/material/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Material, Metadata* |
|GET |`/materials/suggestions` |? |Suggestion* |
|GET |`/groups` |SimpleQuery, Pagination, MetadataRequired |(Group, Metadata*)* |
|POST |`/groups/query` |Query, Statistics, Pagination, MetadataRequired |(Group, Metadata*)* |
|GET |`/groups/<id>` |SimpleQuery, Pagination, MetadataRequired |Group, Metadata* |
|GET |`/groups/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Group, Metadata* |
|GET |`/users` |Pagination |User* |
|GET |`/users/me` | |User, Auth |
|GET |`/users/<id>` | |User |
|GET |`/info` | |Info |
|GET |`/metainfo` |MetainfoQuery |Archive* |
|GET |`/metainfo/<path>` | |Archive or json-value |
## Tasks
- [x] learn fast api and build skeleton API with a few model and static example response
- [x] think of a documentation scheme: swagger, reference, tutorials
- [x] restructure the tests based on the skeleton (test first)
- [ ] move the implementation step by step (search, upload, edit, archive, raw, metainfo, ...)
- [ ] implement everything
Ideally we would already have end-to-end GUI tests that use the API for "real".

Milestone: v0.10.0

# Improving the overview page with section results (#517)
Markus Scheidgen · updated 2021-05-04 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/517

In non-trivial cases (e.g. a long geometry optimisation and many sites in the system), the load times for the overview page (v0.10.1) can be quite long. For example here: https://nomad-lab.eu/prod/rae/beta/gui/entry/id/10k9kKRNTiWFsaK56Ft3og/xUjNvzui8NTYPBhdxshs5QukaRms/overview
The cause is loading a large archive of several MB. With the new section results, we can reduce this by only partially loading some of the archive. This is more about test-driving the section results and trying out their viability before investing more time. The overview page stands in for the "average" API user that is just interested in the results.
- [x] The entry page should download the entry data using API v1, which will contain the old `section_metadata` and if available, also the new `section_results`. The GUI should primarily use section_results as the information source, but will also have a fallback of using the information in `section_metadata` to populate the GUI.
- [x] If `section_results` is available for the entry, the overview page should use the new partial archive requests to only download `section_results` from the archive (incl. the referenced systems, properties, etc.). This will improve the page loading speed. This is not just about networking; the API will also only load the relevant sections from disk on the server. Also in this case there should be a fallback that downloads the full archive and uses the old metadata to populate the GUI.
- [x] The overview page uses the available properties (provided as part of the repo metadata) to pre-render all properties (not just the material).
- [x] We add a geometry opt. normalizer that compresses the convergence into a numpy array quantity. This should be saved in section geometry optimisation.
- [x] We produce a section results with the new normalizer. The geometry optimisation convergence can be added as a property. We can do this without the new indices.
- [x] We remove the structure view from the optimization component. It was rather gimmicky and doesn't warrant loading all the systems.

Assignee: Lauri Himanen

# Move parsers to GitHub (#474)
Markus Scheidgen · updated 2021-03-30 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/474

This should probably be done via a script. If we keep the script, we might use it for future updates.
- skip if already on github
- ensure that a proper license file exists in the repo
- replace the copyright headers in all source files
- a minimal GitHub Actions pipeline with a minimal test and (commented out) linting
- create a repo on GitHub with name, description, license ... otherwise start with defaults
- push the repo with all branches/tags
- change the mother project submodule

Milestone: v0.10.0 · Assignee: Markus Scheidgen

# Optimade based on optimade-python-tools (#461)
Markus Scheidgen · updated 2021-03-02 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/461

If we have fastapi available in deployments (#408), we should replace "our" optimade implementation with the official optimade-python-tools. This will require adding more ES support to optimade-python-tools.

Milestone: v0.10.0 · Assignee: Markus Scheidgen

# Configurable file name part for archive files to allow multiple archive versions (#466)
Markus Scheidgen · updated 2021-03-02 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/466

Assignee: Markus Scheidgen

# VASP parser INCAR (#489)
Markus Scheidgen · updated 2021-02-02 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/489

The new parser catches quite a few more INCAR parameters, which causes a lot of warnings. At the moment this is a problem, due to the high amount of warnings that will spam our logging system.
Due to the amount of possible INCAR parameters, the fact that they are all defined in two flavours ("in" user set, "inOut" set by VASP), and that they are often not well documented by VASP itself, I don't think they should be covered by individual metainfo definitions.
There are two solutions. Either we keep track of the missing definitions during parsing and then log once at the end with a list of keys. Or we finally add dict support to the metainfo and store all INCAR parameters in one or two (in, inOut) dictionaries, without specific definitions for each INCAR parameter.

Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

# Wrong log usage in new parsers (#483)
Markus Scheidgen · updated 2021-02-02 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/483

I found several occasions with parametrised log messages. This is wrong. Log messages should almost always be constant strings. Be aware that these logs are only partially meant for humans. We also need them to be analysable in order to manage the huge amounts of logs that NOMAD produces.
Parameters should be passed as keyword arguments, and only those that are used for analyses and have a certain significance should be added directly. For informative data you can use the `data` keyword argument to pass a dictionary with additional values. Make sure that these data are JSON serializable.
Something like this (from the new vasp parser):
```python
self.outcar_parser.logger.warn('Cannot determine lm fields for n_lm=%d' % n_lm)
```
Should be:
```python
self.outcar_parser.logger.warn('Cannot determine lm fields', data=dict(n_lm=n_lm))
```
This is a huge pain with the old parsers, as they spam our log database with information that we cannot really use well and that is hard to swallow for the underlying elasticsearch.
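With standard `logging`, the pattern could be enforced with a small adapter (a stand-in sketch, not NOMAD's actual structured-logging setup):

```python
import json
import logging

class StructuredLogger(logging.LoggerAdapter):
    """Keeps messages constant; attaches analysable values as a JSON 'data' field."""
    def process(self, msg, kwargs):
        data = kwargs.pop('data', None)
        if data is not None:
            # json.dumps fails early here if the values are not JSON serializable
            kwargs.setdefault('extra', {})['data'] = json.dumps(data)
        return msg, kwargs

logger = StructuredLogger(logging.getLogger('vasp_parser'), {})
logger.warning('Cannot determine lm fields', data=dict(n_lm=5))
```

The message stays a constant string that can be grouped and counted, while the variable parts land in a structured field that elasticsearch can index.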
@ladinesa We should fix this for the new parsers. Please go through all your log entries and adjust them accordingly.
@himanel1 You should also be aware of this.

Assignee: Alvin Noe Ladines

# Refactor (the use of) config.api_url (#460)
Markus Scheidgen · updated 2021-01-11 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/460

This function has several issues:
- we sometimes need the GUI url
- we have multiple APIs
- we currently use the ssl parameter to distinguish internal/external use, but even external use might be without ssl (e.g. on an OASIS)

Assignee: Markus Scheidgen

# Make dependence on MDTraj and MDAnalysis optional (#456)
Markus Scheidgen · updated 2021-01-11 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/456

These two dependencies are not so easy to install and cause problems on a lot of systems. As our Python package gets more users, I would like to make MDTraj and MDAnalysis as optional as possible.
They are currently only used by some of our MD parsers. Particularly Gromacs is making NOMAD fail, because MDAnalysis gets imported when the parser list is compiled in `/nomad/parsing/parsers.py`. We need to make sure that imports are optional, either with try/except or by importing only on use. Ideally the parsers will react to missing MDAnalysis and disable a few features on parsing, with a logged warning.
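The try/except variant could look like this (the parser class and its `parse` signature are illustrative, not the real Gromacs parser):

```python
try:
    import MDAnalysis
except ImportError:
    MDAnalysis = None  # the parser still loads, just with fewer features

class GromacsParser:  # illustrative stand-in for the real parser class
    def parse(self, mainfile, archive, logger):
        if MDAnalysis is None:
            logger.warning('MDAnalysis is not installed, skipping trajectory analysis')
            return
        # ... MDAnalysis-based trajectory parsing would go here ...
```

This way, compiling the parser list never fails; the feature is only disabled at parse time, with a logged warning as suggested above.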
Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

# Rewrite CRYSTAL parser with new parser framework (#437)
Lauri Himanen · updated 2021-01-11 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/437

- [x] Rewrite the main parser completely
- [x] Add support for geometry optimization, band structures and DOS
- [x] Restructure metainfo
- [x] Add regression tests directly to the parser repository
- [x] Test on a variety of production data in the development cluster

Assignee: Lauri Himanen

# Parser readmes (#394)
Markus Scheidgen · updated 2020-12-18 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/394

All parsers should get a new consistent readme. @Temok is already preparing tables with filenames (#381).
- put the new readme in each parser
- make the GUI display the readme via a click on the parser/code, both on the upload page and in the upload view
- test if developing parsers with pypi package works
related to #375, #381

Assignee: Cuauhtemoc Salazar (temok@physik.hu-berlin.de)

# Phonopy Workflows (#396)
Markus Scheidgen · updated 2020-12-18 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/396

- the Phonopy parser should be an actual Phonopy parser and not just an FHI-aims phonopy parser
- the Phonopy parser should use the new workflow system

Assignee: Alvin Noe Ladines

# VASP geometry optimizations as workflows (#363)
Markus Scheidgen · updated 2020-12-18 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/363

- [x] metainfo for workflows
- [x] adapt VASP parser
- [ ] normalizer (system, dos, encyclopedia, ?) and section_metadata use the workflow result system/calculation
![image](/uploads/aaa6b494cf0afc03d96372e3a338c973/image.png)

Assignee: Alvin Noe Ladines

# Parallel archive access (#350)
Markus Scheidgen · updated 2020-12-18 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/350

The performance when accessing large archive query result sets is poor. We identified that this is mostly due to many small serial file system accesses to GPFS. The endpoint `nomad.app.api.archive.ArchiveQueryResource` should use threading to read many calcs simultaneously.

Assignee: Markus Scheidgen

# Restructure the Sphinx documentation (#451)
Markus Scheidgen · updated 2020-12-15 · https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/451

It is just not intuitive.