# nomad-FAIR issues
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues (feed updated 2021-03-02T11:38:01Z)

# Issue 461: Optimade based on optimade-python-tools
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/461
Author: Markus Scheidgen · Updated: 2021-03-02 · Milestone: v0.10.0 · Assignee: Markus Scheidgen

If we have fastapi available in deployments (#408), we should replace "our" optimade implementation with the official optimade-python-tools. This will require adding more ES support to optimade-python-tools.

# Issue 460: Refactor (the use of) config.api_url
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/460
Author: Markus Scheidgen · Updated: 2021-01-11 · Assignee: Markus Scheidgen

This function has several issues:
- we sometimes need the GUI URL
- we have multiple APIs
- we currently use the ssl parameter to distinguish internal/external use, but even external use might be without SSL (e.g. on an OASIS)

# Issue 456: Make dependence on MDTraj and MDAnalysis optional
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/456
Author: Markus Scheidgen · Updated: 2021-01-11 · Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

These two dependencies are not easy to install and cause problems on a lot of systems. As our Python package gets more users, I would like to make MDTraj and MDAnalysis as optional as possible.
They are currently only used by some of our MDParsers. In particular, Gromacs makes NOMAD fail, because MDAnalysis gets imported when the parser list is compiled in `/nomad/parsing/parsers.py`. We need to make sure that these imports are optional, either with try/except or by importing only on use. Ideally, the parsers will react to a missing MDAnalysis and disable a few features on parsing, with a logged warning.
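The optional-import pattern suggested above could be sketched like this (a sketch only; the module layout and logger are illustrative, not the actual NOMAD code):

```python
import logging

logger = logging.getLogger(__name__)

try:
    import MDAnalysis  # heavy, hard-to-install optional dependency
except ImportError:
    MDAnalysis = None
    logger.warning(
        "MDAnalysis is not installed; Gromacs parsing will run "
        "with some features disabled.")


def parse_trajectory(path):
    """Parse a trajectory, degrading gracefully without MDAnalysis."""
    if MDAnalysis is None:
        # Skip the MDAnalysis-based features instead of failing hard.
        return None
    return MDAnalysis.Universe(path)
```

With this, importing the parser module never fails; only the MDAnalysis-backed features are disabled.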
# Issue 454: A fastapi re-implementation of the "encyclopedia" API
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/454
Author: Markus Scheidgen · Updated: 2021-05-06

You could branch off the `fastapi` branch and start on an "encyclopedia" API. I would suggest adding `materials.py` to `nomad/app_fastapi/routers/`.
We should do a few renames where applicable:
- encyclopedia -> material(s)
- calculation(s) -> entr(y|ies)
A few guidelines:
- not just for the GUI, but user friendly (think curl and browser, not Python)
- GET for everything, even if less powerful than a potentially necessary POST version (i.e. something for users, something for the GUI)
- we GET and POST resources in HTTP, e.g. use `POST materials/query` and not `POST materials` (as we do not post a material, but a query for materials)
- nothing that requires substantial URL encoding, e.g. no JSON in query params or arrays in paths
- full documentation
- the responses echo the normalized request + data
- for `entries.py` I designed a few models around common aspects like pagination or required properties. Those should be reused (and refactored if necessary)
Here are a few operation suggestions:
- `GET materials/`
- `POST materials/query`; the body could be designed to also allow retrieving entries for multiple material ids and potentially multiple given entry ids, multiple specified properties, etc.
- `GET materials/<id>/entries`
- `POST materials/<id>/entries/query`; the body could be designed to also allow retrieving data for multiple entry ids; we could also skip this and force the use of `POST materials/query` for any complex request.
- `GET materials/<id>/entries/<id>`
- `POST materials/<id>/entries/<id>/query`; not sure if necessary, as required properties usually go well in query params.
Here are a few examples representing the simplest, most minimalistic operation set that should suffice:
- `GET materials?owner=public&q=atoms__H&q=atoms__O&include=material_name&include=material_id&include=formula` (like `GET entries`)
- `GET materials/<id>/entries?include=entry_id&include=mainfile`
- `GET materials/<id>/entries/<id>?include=electronic_dos&include=electronic_band_structure`
- `POST materials/query`; The body covers all more specialised aspects, e.g. aggregation, statistics, etc. (similar to `POST entries/query`)
The `POST materials/query` body could look like this:
```json
{
  "query": { ... },      // or
  "ids": [ ... ],
  "pagination": { ... },
  "required": { ... },
  "entries": {
    "ids": [ ... ],      // or
    "query": { ... },
    "pagination": { ... },
    "required": { ... }
  }
}
```
And it could return materials with their aggregated entries:
```json
{
  ...
  "data": [
    {
      "material_id": ...,
      ...
      "entries": [
        {
          "calc_id": ...,
          ...
        }
      ]
    }
  ]
}
```
The entries in `GET /entries` and the entries in `GET /materials/<id>/entries` should be the same. Currently, the repo entries and encyclopedia entries are different. Ideally, both operations offer the exact same interface; the only difference being that the materials one adds another search criterion.

# Issue 451: Restructure the Sphinx documentation
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/451
Author: Markus Scheidgen · Updated: 2020-12-15

It is just not intuitive.

# Issue 437: Rewrite CRYSTAL parser with new parser framework
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/437
Author: Lauri Himanen · Updated: 2021-01-11 · Assignee: Lauri Himanen

- [x] Rewrite the main parser completely
- [x] Add support for geometry optimization, band structures and DOS
- [x] Restructure metainfo
- [x] Add regression tests directly to the parser repository
- [x] Test on a variety of production data in the development cluster

# Issue 421: Improvements to text_parser
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/421
Author: Markus Scheidgen · Updated: 2021-06-17 · Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

The usual stuff, mandatory:
- [x] documentation (including simple example)
- [x] proper tests (you can turn `test_parsing.py` into a directory `parsing` with the old `test_parsing.py` and a new `test_text_parser.py`)
- [x] add type annotations to all public methods
- [ ] catch all matches to build a map of what has been extracted from the file and provide parser "telemetry"/statistics. In the old framework, we had an ANSI colored version of the parsed file. This was super helpful for debugging. This is also something we could show in the GUI, so you can see what was covered.
More specific suggestions:
- [x] it should be possible to initialise `Quantity` from a metainfo quantity and get name, type, shape, etc. from there
- [x] there should be automatic conversion from the unit in the text file to the unit attached to the metainfo quantity via Pint; the unit conversion method should not be called `to_si`, but `to_metainfo_unit` or something similar.
- [x] since `Quantity` is a class, should `str_operation` be provided via inheritance? Or at least there should be the option to overwrite it in a subclass?
- [x] I do not like the idea of storing the values in `Quantity`. That introduces a very tight coupling between the two classes and makes it harder to clear out values between parser runs; and what if someone gets the idea to use the same quantity in different parsers? The values should be stored in a separate mapping in `UnstructuredTextFileParser`; `Quantity` should only provide the pattern and convert functions.
- [x] If you do this dynamic parsing thing, where you only parse once someone asks for a quantity, why not go all the way and parse only for that quantity?
- [x] You could also add `__getattr__`/`__getattribute__` to allow more convenient `parser.quantity` style access.
- [ ] You could have an additional function that allows passing a section. This function could automatically look up and try parsing all quantities in that section's definition and assign values accordingly.
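The `__getattr__` idea combined with per-quantity lazy parsing could be sketched like this (hypothetical class and quantity names, not the actual text_parser API; note that values are kept in the parser's own mapping, not on the quantity objects):

```python
import re

class LazyTextParser:
    """Parses a quantity from the text only when it is first accessed."""

    def __init__(self, text, quantities):
        # quantities maps a name to a (regex pattern, convert function) pair
        self._text = text
        self._quantities = quantities
        self._results = {}  # values live here, not on the quantity objects

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        if name.startswith('_') or name not in self._quantities:
            raise AttributeError(name)
        if name not in self._results:
            pattern, convert = self._quantities[name]
            match = re.search(pattern, self._text)
            self._results[name] = convert(match.group(1)) if match else None
        return self._results[name]

parser = LazyTextParser(
    'n_atoms = 8\nenergy_total = -12.5 eV\n',
    {
        'n_atoms': (r'n_atoms\s*=\s*(\d+)', int),
        'energy_total': (r'energy_total\s*=\s*([-\d.]+)', float),
    })
```

Clearing `_results` between runs then resets the parser without touching the quantity definitions.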
# Issue 419: NOMAD Metainfo refactor
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/419
Author: Markus Scheidgen · Updated: 2021-09-04 · Milestone: v1.0.0-beta · Assignee: Alvin Noe Ladines

There should be some coordinated effort to clean up the NOMAD Metainfo:
- [ ] parser specific metainfo should really be parser specific ... there are some that miss the `x_...` prefix
- [x] only have one common code file/package for computational
- [x] have an extra package for workflows
- [x] refactor names, make use of aliases
- [x] get rid of sampling method and frame sequence
- [x] remove repeats where applicable
- [x] add derived for normalised quantities
- [x] analyse which quantities are not used at all via ES
- [ ] how to deal with "external" references (to other archive objects, to binary files, and files in general)
- [x] a set of public metadata for "elastic" properties (#407)
- [x] a set of public md properties
- [ ] further classification of non-crystalline systems (e.g. amorphous solids/glass)
Related to: #308, #289, #358, #407

# Issue 408: Rewrite/refactor of our API with FastAPI
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/408
Author: Markus Scheidgen · Updated: 2021-05-06 · Milestone: v0.10.0
## Motivation
### Flask + REST Plus is bad
- outdated and not really maintained
- cumbersome, restrictive REST Plus specific request/response models
- argparser (Flask) and models (REST Plus) are unrelated
- no manipulation of generated OpenAPI spec
- optimade Python tools are not based on Flask REST Plus
### Fast API seems to solve a lot of problems
- models based on type annotations (pydantic)
- object oriented models
- query parameter parsing from function parameter annotations
- based on more modern uvicorn; prod==dev server
### our API is messy
- lots of inconsistencies between repo/archive/raw/... related to queries, pagination, etc.
- lots of doubled CnC code pieces
- lots of functionality/parameter crammed into GET requests, e.g. `/repo/`
- *grown* tests
## Operations
Before we look at API operations (Fast API and Open API call the basic API building blocks *operations*),
we should look at our datamodel, its core *entities*, and principal *views* on the data.
### NOMAD entities
- (info)
- users
- uploads
- entries
- datasets
- materials
- (metainfo)
#### views
- metadata (repo)
- archive
- raw
- encyclopedia, currently a mix of metadata and archive
- processing/upload (should be just part of metadata)
In contrast to the old API, we structure the API based on entities. The *metadata* view is the default, and there is access to the other views via subpaths `raw`, `archive`.
#### models
Many operations use the same input and output structures over and over again:
- Query
- Pagination, includes response size, order, aggregation
- (Metadata|Archive)Required, a list or structure of things to include in return
- Statistics, additional histogram information on metadata results
- FileFilter, for file downloads
- Models for our entities: User, UserMetadata, Dataset, Entry, Upload
#### operations
|http |path |input |output |
|- |- |- |- |
|GET |`/entries` |SimpleQuery, Pagination, MetadataRequired |Metadata* |
|POST |`/entries/edit` |Query, UserMetadata, EditActions |Edit or EditActions |
|POST |`/entries/query` |Query, Statistics, Pagination, MetadataRequired, |Metadata*, Stat. |
|GET |`/entries/<id>` |MetadataRequired |Metadata |
|GET |`/entries/raw` |SimpleQuery, FileFilter |.zip+manifest |
|POST |`/entries/raw` |Query, FileFilter |.zip+manifest |
|GET |`/entries/<id>/raw` |FileFilter |.zip+manifest |
|GET |`/entries/<id>/raw/<path>` | |file |
|GET |`/entries/archive` |SimpleQuery, Pagination |Archive* or .zip+manifest |
|POST |`/entries/archive` |Query, ArchiveRequired, Pagination |Archive* or .zip+manifest |
|GET |`/entries/<id>/archive` |ArchiveRequired |Archive (.json) |
|GET |`/entries/<id>/archive/<path>` | |json-value |
|GET |`/uploads` |SimpleQuery, Pagination, MetadataRequired |(Upload, Metadata*)* |
|POST |`/uploads/query` |Query, Statistics, Pagination, MetadataRequired |(Upload, Metadata*)* |
|PUT |`/uploads` |file |Upload |
|GET |`/uploads/<id>` |Pagination, MetadataRequired |Upload, Metadata* |
|POST |`/uploads/<id>` |Upload+Actions |Upload |
|GET |`/uploads/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Upload, Metadata* |
|DELETE |`/uploads/<id>` | |Upload |
|GET |`/uploads/<id>/raw` |FileFilter |.zip+manifest |
|GET |`/uploads/<id>/<glob>` |FileFilter |.zip+manifest |
|GET |`/uploads/<id>/<path>` | |file |
|GET |`/datasets` |SimpleQuery, Pagination, MetadataRequired |(Dataset, Metadata*)* |
|POST |`/datasets/query` |Query, Statistics, Pagination, MetadataRequired |(Dataset, Metadata*)* |
|PUT |`/datasets` |Dataset |Dataset |
|GET |`/datasets/<id>` |SimpleQuery, Pagination, MetadataRequired |Dataset, Metadata* |
|GET |`/datasets/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Dataset, Metadata* |
|POST |`/datasets/<id>` |Dataset+Actions |Dataset |
|DELETE |`/datasets/<id>` | |Dataset |
|GET |`/material` |SimpleQuery, Pagination, MetadataRequired |(Material, Metadata*)* |
|POST |`/material/query` |Query, Statistics, Pagination, MetadataRequired |(Material, Metadata*)* |
|GET |`/material/<id>` |SimpleQuery, Pagination, MetadataRequired |Material, Metadata* |
|GET |`/material/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Material, Metadata* |
|GET |`/materials/suggestions` |? |Suggestion* |
|GET |`/groups` |SimpleQuery, Pagination, MetadataRequired |(Group, Metadata*)* |
|POST |`/groups/query` |Query, Statistics, Pagination, MetadataRequired |(Group, Metadata*)* |
|GET |`/groups/<id>` |SimpleQuery, Pagination, MetadataRequired |Group, Metadata* |
|GET |`/groups/<id>/query` |Query, Statistics, Pagination, MetadataRequired |Group, Metadata* |
|GET |`/users` |Pagination |User* |
|GET |`/users/me` | |User, Auth |
|GET |`/users/<id>` | |User |
|GET |`/info` | |Info |
|GET |`/metainfo` |MetainfoQuery |Archive* |
|GET |`/metainfo/<path>` | |Archive or json-value |
## Tasks
- [x] learn FastAPI and build a skeleton API with a few models and static example responses
- [x] think of a documentation scheme: swagger, reference, tutorials
- [x] restructure the tests based on the skeleton (test first)
- [ ] move the implementation step by step (search, upload, edit, archive, raw, metainfo, ...)
- [ ] implement everything
Ideally we would already have end-to-end GUI tests that use the API "for real".

# Issue 407: Workflows for elastic constants via elastic parser
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/407
Author: Markus Scheidgen · Updated: 2020-10-06 · Assignee: Alvin Noe Ladines

# Issue 405: Improved infrastructure for Materials-oriented data
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/405
Author: Lauri Himanen · Updated: 2020-11-03 · Assignee: Lauri Himanen

**Problem**
The way we store data related to materials deserves some additional thought, as we are increasingly getting involved with data and functionality that revolves around materials and not just entries/calculations.
We are already including several external sources of material-related data: the AFLOW prototype data, the Springer data, and most recently the similarity data. So unifying the way we ingest and store this kind of data is becoming important. This is also probably relevant for the eventual implementation of the NOMAD Portal, which will aggregate material data from different sources.
After implementing the encyclopedia API with the current ES index, it has become clear that implementing a fast and flexible materials search cannot be done with only one index containing the data from entries. We will need a separate elasticsearch index for materials. This would make all material-oriented queries (encyclopedia) MUCH easier to implement and more performant.
**Metainfo**
I'm suggesting the following actions:
- Creating a new metainfo class called "Material". It will serve as our schema for all materials related information that is stored in ES/mongo/etc. This metainfo is completely separated from EntryArchive, so they do not share any parent metainfo. This approach allows us to consistently model the data contents and also handle the ES/mongo storage of the data through annotations. Notice that this metainfo is *separate* from the material related data that we store per entry. Initially only the similarity data will use this metainfo structure, but more can be added later.
- We need to create a common CLI interface for populating any materials data from external static sources (similarity, AFLOW, springer, etc.). These scripts would need to be called e.g. when setting up a new deployment. The actual data that is ingested should not be a part of the git repository, but instead should be linked through nomad.yml configuration variables. This is e.g. how the springer data is currently handled. To simplify things, only the similarity data will be initially handled with this mechanism and other data will be migrated later.
- When we refactor the way we store material-related data we can use this Material root metainfo as a single source that keeps getting updated as entries related to the material are added/removed. It would also serve as our primary search index for material-related queries.
**ElasticSearch**
On a technical level, we need to somehow manage calculation/material relationships with ES. This is a fairly complex relationship, as there can be materials with zero to many calculations (zero in the future with the NOMAD Portal), and calculations with zero or one material. A typical child/parent relationship is thus not directly applicable. This is further complicated by the fact that some calculations are private or embargoed, meaning that they should only be visible in authenticated queries. There are, however, several ways to do this:
1. **Denormalization**: This is what we are doing currently. Each calculation carries information about its material.
PROS:
- This way we get along with one flat index. Thus the information about calculations is automatically in sync with the information shown for materials: deleted, embargoed and private calculations are automatically correctly handled for the material queries.
CONS:
- This increases the dataset for material queries 1000-fold, but this is not a problem for most queries. The main problem is that some more complex queries need to use deep aggregations to access information across the calculations. These operations are becoming prohibitively expensive.
2. [**Transforms**](https://www.elastic.co/guide/en/elasticsearch/reference/current/transform-overview.html): This is a mechanism that caches pre-defined aggregation results into a separate index for faster retrieval. The caching is typically done periodically.
PROS:
- This would allow us to mainly work with a flat index, and let ES do the bulk of the work for us. Thus the information about calculations is automatically (eventually, due to the refresh interval) in sync with the information shown for materials: deleted, embargoed and private calculations are automatically correctly handled for the material queries.
CONS:
   - The downside is that we have to work within the aggregation framework to define the cached results (which becomes fairly difficult), there is no possibility to have materials with zero calculations (NOMAD Portal), and we cannot manually affect the materials index layout or entries (e.g. inserting external information about similarities or prototypes). Also, there will be a delay between successful processing and the calculation showing up in the index.
3. **Completely separate index**: A new ES index with a custom data layout for material data that also duplicates a selected part of the calculation data. Basically denormalization once again, but this time the other way around: calculation data denormalized into a materials index. Syncing between material data and newly added calculations is done by automatically running ingestion pipelines when a new calculation entry is added. In order to handle calculation deletion and data visibility (embargoed, private), a smart data layout is needed, together with processes that automatically update this index along with the material data. Perhaps storing calculations as inner/nested objects of a material would be a good option?
PROS:
- Maximum performance. Material queries will have the same time-complexity as queries for individual calculations, which is the best that ES can offer. All types of relationships are handled, adding external data is not a problem.
CONS:
- We have to be careful in setting up the syncing of data correctly. Especially handling the calculation visibility will become quite hard. Proper testing of the syncing is paramount.
   - As the calculation data that is associated with a material is duplicated, the material index will become quite large. The order of magnitude will be similar to the calculations index.
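For option 3, storing calculations as nested objects could look roughly like the following index mapping (the field names are illustrative, not the actual NOMAD mapping); `nested` queries could then filter out embargoed or private entries per query:

```json
{
  "mappings": {
    "properties": {
      "material_id": { "type": "keyword" },
      "formula": { "type": "keyword" },
      "entries": {
        "type": "nested",
        "properties": {
          "calc_id": { "type": "keyword" },
          "owners": { "type": "keyword" },
          "published": { "type": "boolean" }
        }
      }
    }
  }
}
```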
# Issue 403: Clean up the experimental data
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/403
Author: Markus Scheidgen · Updated: 2020-11-10 · Milestone: v0.10.0 · Assignee: Markus Scheidgen

For show-casing purposes, the experimental section of NOMAD needs to be "cleaned":
- [x] rename CMS/EMS -> computational/experimental
- [x] fix the uploader and co-author names
- [x] fix other metadata like locations and dates
- [x] better metadata and experiment names
- [x] more data (e.g. automatised EELS indexing)
- [ ] EELS preview?
- [ ] maybe Markus Kühbach has real preview figs for his set
- [x] a disclaimer (in the search) about the "show-case" nature of the experimental section
- [ ] more databases to "index"
## show-case the indexing of an external database
We could use [EELS](https://eelsdb.eu/spectra/) to show that NOMAD could crawl web-based databases to index them. Simple Python web-scraping techniques should suffice to create an upload consisting of the respective web pages, which "parsers" can convert into the respective NOMAD metainfo data.
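A sketch of such a scraper using only the Python standard library (the EELS DB page structure assumed here is hypothetical and would need checking):

```python
from html.parser import HTMLParser

class SpectrumLinkParser(HTMLParser):
    """Collects links that look like individual spectrum pages."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        href = dict(attrs).get('href', '')
        if '/spectra/' in href:
            self.links.append(href)

def extract_spectrum_links(html):
    parser = SpectrumLinkParser()
    parser.feed(html)
    return parser.links
```

Each page fetched this way (e.g. with `urllib.request`) would become a raw file in an upload, which a "parser" then converts into NOMAD metainfo data.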
## show-case the indexing of external repositories
We could use Zenodo and its API to improve the metadata in NOMAD/experimental by downloading titles, descriptions, authors, etc.
## authors
There is a difference between the person uploading the metadata, i.e. the person providing the reference to the data, and the authors of the data. The latter are usually given by an external database or repository. We need to reflect this in the NOMAD user datamodel and add support for non-NOMAD-user authors: #404

# Issue 398: Unit/constant definition file
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/398
Author: Lauri Himanen · Updated: 2020-09-29 · Assignee: Lauri Himanen

We use the Pint library to define our unit conversion routines. It is already used extensively in the backend, e.g. by normalizers, to perform conversions. Pint is also what mostly defines our natural constants, as they are often used in the conversions.
We should make our unit systems and natural constant definitions more explicit. As of now, we rely on what the Pint package defines internally. A better approach would be to define these in a single file that is loaded by Pint and can also be served to other contexts. This would allow us to:
- Have a single source for all units and constants used across the whole package. This will enable us to centrally manage the units and avoid breaking changes if the defaults of Pint change.
- Communicate units/constants between backend and frontend. The GUI will need to do unit conversion on the client, which requires syncing this data from Python to JavaScript. This will also affect the format of the definition file.
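Pint can load its definitions from a plain-text file (e.g. `pint.UnitRegistry('nomad_units.txt')`), which would make that file the single source; a minimal fragment in Pint's `name = value = alias` syntax (the selection of units and constants here is illustrative):

```text
# base units
meter = [length] = m
second = [time] = s

# derived units
angstrom = 1e-10 * meter = Å
minute = 60 * second = min

# constants
speed_of_light = 299792458 m/s = c
```

Because it is plain text with a simple grammar, the same file could be parsed on the JavaScript side for client-side conversions.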
# Issue 396: Phonopy Workflows
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/396
Author: Markus Scheidgen · Updated: 2020-12-18 · Assignee: Alvin Noe Ladines

- Phonopy parser to be an actual Phonopy parser and not just an FHI-aims phonopy parser
- Phonopy parser to use the new workflow system

# Issue 394: Parser readmes
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/394
Author: Markus Scheidgen · Updated: 2020-12-18 · Assignee: Cuauhtemoc Salazar (temok@physik.hu-berlin.de)
All parsers should get a new, consistent readme. @Temok is already preparing tables with filenames (#381).
- put the new readme in each parser
- make the GUI display the readme via a click on the parser/code on the upload page and in the upload view
- test if developing parsers with the PyPI package works
related to #375, #381

# Issue 393: Backend driven GUI artifacts
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/393
Author: Markus Scheidgen · Updated: 2020-08-17 · Assignee: Markus Scheidgen

There is information in the GUI, or used by the GUI, that logically belongs to the backend. We have multiple mechanisms of synchronization for this information; this needs to be unified:
- json generated by CLI (e.g. `metainfo.json`)
- json manually generated (e.g. `search_quantities.json`)
- version information generated by script during build
- hard-coded in the GUI (`domains.js`)
All these artifacts should be generated via a CLI command, e.g. `nomad dev gui-artifacts`. The `domains.js` data should come from the metainfo via a domain extension.
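The unified command could be sketched like this (plain argparse for illustration, and the artifact generators are placeholders for the real metainfo/search/version sources):

```python
import argparse
import json

def generate_gui_artifacts():
    """Collect all GUI artifacts from their backend sources (placeholders)."""
    return {
        'metainfo': {},            # from the metainfo definitions
        'search_quantities': {},   # generated instead of hand-written
        'domains': {},             # replaces the hard-coded domains.js
        'version': '0.0.0',        # injected by the build
    }

def main(argv=None):
    parser = argparse.ArgumentParser(prog='nomad-dev')
    sub = parser.add_subparsers(dest='command', required=True)
    sub.add_parser('gui-artifacts')
    args = parser.parse_args(argv)
    if args.command == 'gui-artifacts':
        print(json.dumps(generate_gui_artifacts()))
```

A single command keeps all GUI-facing artifacts in one generation path instead of four separate mechanisms.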
# Issue 389: Refactor the processing and CLI to rely on metainfo instead of backend objects
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/389
Author: Markus Scheidgen · Updated: 2020-08-19 · Assignee: Markus Scheidgen

This is a continuation of #380.

The idea is that the backend is only exposed to legacy parsers and not to the datamodel, normalizers, new parsers, etc.

# Issue 380: Normalizer should not use the backend anymore
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/380
Author: Markus Scheidgen · Updated: 2020-08-13 · Assignee: Alvin Noe Ladines

We are transitioning from backend to metainfo more and more. While most normalizers already use the metainfo in many cases, there are still some occurrences where a normalizer uses its `self._backend` variable. This should be replaced. There are two candidates: `self.entry_archive` and `self.section_run`.
@ladinesa Please omit everything in `nomad.normalizing.encyclopedia` for now.

# Issue 367: Cleanup the exciting-parser git tree
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/367
Author: Markus Scheidgen · Updated: 2020-07-10 · Milestone: Version 0.8.2 · Assignee: Markus Scheidgen

At some point, multiple GB of test data were added to the exciting-parser git repo. The data is not even part of the current commit anymore. It needs to be stripped from the history to keep clone time under control.
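The usual recipe for this is to find the offending blobs with plain git and then rewrite the history with a tool such as `git filter-repo` (e.g. `git filter-repo --strip-blobs-bigger-than 10M`, followed by a force-push; the threshold is an arbitrary example, and all existing clones must be re-cloned afterwards). Finding the large blobs:

```shell
# list the largest blobs reachable from any ref (size and path);
# run inside a clone of the parser repository
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '/^blob/ {print $3, $4}' |
  sort -rn | head
```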
- [x] adapt VASP parser
- [ ] normalizer (system, dos, encyclopedia, ?) and section_metadata use the workflow result system/calculation
![image](/uploads/aaa6b494cf0afc03d96372e3a338c973/image.png)- [x] metainfo for workflows
- [x] adapt VASP parser
- [ ] normalizer (system, dos, encyclopedia, ?) and section_metadata use the workflow result system/calculation
![image](/uploads/aaa6b494cf0afc03d96372e3a338c973/image.png)