# nomad-FAIR issues

Source: https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues (retrieved 2021-02-11)

---

## [#488 Adapt encyclopedia normalizer to workflows](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/488) — Markus Scheidgen, 2021-02-11

The encyclopedia normalizer depends on framesequence and sampling method to determine a calculation type. @ladinesa started to remove the use of these old metainfo definitions. @himanel1 should adapt the encyclopedia normalizer. For now it should look for workflow information and use this. Only if no workflows are present, it should revert to the old behaviour. Once the migration to workflows is complete, this should also be removed.
Currently (branch new-vasp-parser) this breaks the CI/CD because `tests/data/api/enc_private_material.zip::private/vasprun.xml` is parsed by the new vasp parser and no framesequence is generated anymore -> calc_type=unavailable -> no enc entry -> broken assertion in `tests/app/flask/test_api_encyclopedia.py::TestEncyclopedia::test_material`
Btw, the test case is terrible as it depends on multiple parsers, test data, etc. If one of these dependencies changes, it is really hard to figure out why it fails. Ideally this could be broken down into multiple test cases at some point (e.g. when implementing a new enc API based on the Flask API).

Milestone: v0.10.0 · Assignee: Lauri Himanen

---

## [#427 Add a licence user-metadata in preparation for OASIS etc.](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/427) — Markus Scheidgen, 2021-03-02

Milestone: v0.10.0

---

## [#496 Allow admin user to show/download all data](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/496) — Markus Scheidgen, 2021-03-02

This will help debugging parsers (e.g. with errors on embargo entries) and the OASIS.

Milestone: v0.10.0 · Assignee: Markus Scheidgen

---

## [#428 Analytics datasets](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/428) — Markus Scheidgen, 2021-03-02

These are datasets that can be created by anyone. They need to contain a list of calculations. Ideally also a checksum of the archive. Even more ideally a reference to the archive data at the moment of creation. The overall goal is to have a persistent "archive" of a dataset used for analysis.

Milestone: v0.10.0 · Assignee: Markus Scheidgen

---

## [#348 A new framework for parsing semi structured textfiles](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/348) — Markus Scheidgen, 2021-01-11

The NOMAD CoE parsers use the `simple_parser.py` module for most of the parsers. While this module contains a lot of good ideas, it is probably not a good fit for new parsers.
## Introduction
Why does the old library not work anymore?
- It was built for a streaming backend that we don't need anymore. The streaming backend introduced lots of unnecessary constraints on the order in which data can be produced. This meant that a lot of data had to be temporarily stored in local variables and led to the introduction of the "CachingBackend" with a complex system to declare temporary data.
- It assumes that the structure of the input text matches the structure of the metainfo. Again, lots of temporary data.
- It is easy to get lost in its "syntax" (look at parsers to see what I mean).
- The API is very convoluted, with various parser, nested-context, and backend classes. Ideally a parser would use one class and one object.
- The API allowed writing parsers that are not reusable across multiple parse runs. We have to reinitialise and re-compile all the involved regular expressions all the time.
- The code that controls the parsing and the code that post-processes the parsed data are separated, which makes it hard to understand what happens.
- There are no reusable modules for recurring patterns, e.g. for vectors, matrices, etc.
- There are lots of different parser interfaces (`ParserInterface`, `mainFunction`, `SmartParser`) that the infrastructure needs to address.
- Unit conversion was built into the parsing part and should have been handled by the backend/metainfo.
- The general code organisation and quality in `python_common` is questionable. One big problem is that we cannot separate dependencies that all parsers need from those that only a few need (e.g. `mdanalysis`, `mdtraj`).
- There is no common mechanism to deal with multiple files and *sub-parsers*.
We should create a new framework for developing parsers for semi-structured text files, for the following reasons:
- There will be new parsers to write for the new projects.
- It would give us a fighting chance to modernise the existing parsers.
- There are requests for more code support, hence more parsers.
- It is really hard to explain to someone the "WTFs" of the old framework, because most of the reasons for them are gone by now.
The new framework should have the following features:
- There is a simple interface: a parser consists of one class, and instances can be reused.
- It uses the new metainfo Python library. Target quantities can be addressed by *paths*; arbitrary sections can be addressed; there is no need for *opening* and *closing* sections in a particular order.
- Support for dealing with files: a mechanism that abstracts from the actual files; developers should not *manually* `open` files. Ideally the framework allows parsing files directly from the uploaded zip-file (see the sketch after this list).
- *This list should be extended over time*
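To make the file-handling feature concrete, a minimal sketch of the zip-file idea — the helper name and paths are invented for illustration, not part of any agreed design:

```python
import io
import zipfile

# Hypothetical helper: hand the parser a text stream straight from the upload
# archive, so parser code never opens files on disk itself.
def open_mainfile(upload_path: str, mainfile: str):
    archive = zipfile.ZipFile(upload_path)
    return io.TextIOWrapper(archive.open(mainfile), encoding='utf-8')

# e.g.: for line in open_mainfile('upload.zip', 'calc/super_code.out'): ...
```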
## Exercise
@himanel1 @ladinesa @temokmx @mscheidg
We should all do this to brainstorm a new framework design.
Assume the new parser framework is already implemented. Using this fake input file and the output metainfo below as references: what would the parser look like? Write some *pseudo* Python.
Input file, e.g. `super_code.out`:
```
2020/05/15
*** super_code v2 ***
system 1
--------
sites: H(1.23, 0, 0), H(-1.23, 0, 0), O(0, 0.33, 0)
latice: (0, 0, 0), (1, 0, 0), (1, 1, 0)
energy: 1.29372
*** This was done with magic source ***
*** x°42 ***
system 2
--------
sites: H(1.23, 0, 0), H(-1.23, 0, 0), O(0, 0.33, 0)
cell: (0, 0, 0), (1, 0, 0), (1, 1, 0)
energy: 1.29372
```
Output metainfo:
```python
{
"section_run": [
{
"code_name": "super_code",
"code_version": "2",
"time": "2020-05-15-00:00:00",
"section_system": [
{
"atom_labels": ["H", "H", "O"],
"atom_positions": [[6.23e-18, 0, 0 ], ...
"lattice_vectors": [[0, 0, 0]. ...]
},
{
"atom_labels": ["H", "H", "O"],
"atom_positions": [[6.23e-18, 0, 0 ], ...
"lattice_vectors": [[0, 0, 0]. ...]
}
],
"section_scc": [
{
"scc_to_system_ref": "/section_run/0/section_system/0",
"energy_total": 3.28172e-18
"x_super_code_sites_magic_sauce": "x°42"
},
{
"scc_to_system_ref": "/section_run/0/section_system/1",
"energy_total": 3.28172e-18
}
]
}
]
}
```
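For a concrete starting point, here is one possible baseline written with plain `re` — a runnable illustration of the target mapping only, not a proposal for the framework API:

```python
import re

def parse_super_code(text: str) -> dict:
    # code version from the banner line
    version = re.search(r'\*\*\* super_code v(\d+) \*\*\*', text).group(1)
    systems = []
    # one block per "system N" header, up to the next header or end of input
    for block in re.split(r'system \d+\n-+\n', text)[1:]:
        sites = re.search(r'sites: (.*)', block).group(1)
        labels = re.findall(r'([A-Z][a-z]?)\(', sites)
        energy = float(re.search(r'energy: (\S+)', block).group(1))
        systems.append({'atom_labels': labels, 'energy': energy})
    return {'code_name': 'super_code', 'code_version': version, 'systems': systems}
```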
Please put your solutions in separate comments and discuss.

Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

---

## [#403 Clean up the experimental data](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/403) — Markus Scheidgen, 2020-11-10
For show-casing purposes, the experimental section of NOMAD needs to be "cleaned":
- [x] rename CMS/EMS -> computational/experimental
- [x] fix the uploader and co-author names
- [x] fix other metadata like locations and dates
- [x] better metadata and experiment names
- [x] more data (e.g. automated EELS indexing)
- [ ] EELS preview?
- [ ] maybe Markus Kühbach has real preview figs for his set
- [x] a disclaimer (in the search) about the "show-case" nature of the experimental section
- [ ] more databases to "index"
## show-case the indexing of external databases
We could use [EELS](https://eelsdb.eu/spectra/) to show that NOMAD could crawl and index web-based databases. Simple Python web-scraping techniques should suffice to create an upload consisting of the respective web pages, which "parsers" can then convert into respective NOMAD metainfo data.
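A hedged sketch of what such a crawl could look like — the URL pattern, CSS selector, and limit are guesses; the real page structure would need to be checked:

```python
import requests
from bs4 import BeautifulSoup

# Fetch the spectra index and store each linked page as a "mainfile" that a
# later parser converts into NOMAD metainfo.
index = requests.get('https://eelsdb.eu/spectra/', timeout=30)
soup = BeautifulSoup(index.text, 'html.parser')

for link in soup.select('a[href*="/spectra/"]')[:10]:  # selector is a guess
    url = link['href']
    page = requests.get(url, timeout=30)
    name = url.rstrip('/').split('/')[-1] + '.html'
    with open(name, 'w', encoding='utf-8') as f:  # becomes part of an upload
        f.write(page.text)
```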
## show-case the indexing of external repositories
We could use Zenodo and its API to improve the metadata in NOMAD/experimental by downloading titles, descriptions, authors, etc.
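Zenodo exposes a public REST API; a sketch of the enrichment step, where the record id is a placeholder and the field mapping into NOMAD is an assumption:

```python
import requests

# Fetch one record's metadata from Zenodo's REST API.
record = requests.get('https://zenodo.org/api/records/1234567', timeout=30).json()
enriched = {
    'title': record['metadata']['title'],
    'description': record['metadata'].get('description'),
    'authors': [creator['name'] for creator in record['metadata']['creators']],
}
print(enriched)
```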
## authors
There is a difference between the person uploading the metadata, i.e. the person providing the reference to the data, and the authors of the data. The latter are usually given by an external database or repository. We need to reflect this in the NOMAD user datamodel and add support for non-NOMAD-user authors: #404

Milestone: v0.10.0 · Assignee: Markus Scheidgen

---

## [#473 Complex encyclopedia search](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/473) — Markus Scheidgen, 2021-02-22
- [x] New API feature for performing complex queries based on the Optimade language
- [x] Regression tests for the complex query API
- [x] Refactored GUI for building the complex queries together with a new results layout. This is mostly up to Iker.

Milestone: v0.10.0 · Assignee: Lauri Himanen

---

## [#481 Crystal parser very slow on some calcs](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/481) — Markus Scheidgen, 2021-01-25

While the new crystal parser performs well on most calculations, some are processed extremely slowly. This example calc_id=`g4uv0JfJhVDonAYjtYnb6Z1udzEs` took over 2000s to process (the mainfile is only 200kb). This only happens on a few entries (~200 of 4000).
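One hedged way to narrow this down locally would be to profile a single parse of the slow mainfile; the import path, call signature, and file path below are placeholders, not the actual parser API:

```python
import cProfile
import pstats

from crystalparser import CrystalParser  # assumed import path

profiler = cProfile.Profile()
profiler.enable()
CrystalParser().parse('path/to/slow/mainfile.out')  # simplified call, placeholder file
profiler.disable()

# show the 20 most expensive call sites by cumulative time
pstats.Stats(profiler).sort_stats('cumulative').print_stats(20)
```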
I remember @himanel1 complaining about slow runs on the dev cluster. Maybe that has the same cause.

Milestone: v0.10.0 · Assignee: Lauri Himanen

---

## [#490 Dict/JSON typed metainfo quantities](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/490) — Markus Scheidgen, 2021-02-03

Milestone: v0.10.0 · Assignee: Markus Scheidgen

---

## [#478 Different PBC in new vs old crystal parser](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/478) — Markus Scheidgen, 2021-01-21

The new crystal parser assigns `(true, true, true)` to many (or all) calcs' periodic boundary conditions, where the old parser was assigning `(false, false, false)`. As a result, many calcs that were molecules in the old system become bulk or end up without a system type.
You can compare the production (https://nomad-lab.eu/prod/rae/gui/search?visualization=system&dft.code_name=Crystal) with the reprocessing NOMAD installation (https://nomad-lab.eu/prod/rae/reprocess/gui/search?dft.code_name=Crystal&visualization=system)
An example calc where you see the different PBC: 6GPEE6tsj4sVMHkjuhboMyi5f_9u
@himanel1 could you investigate whether the new behaviour is indeed intended/correct?

Milestone: v0.10.0 · Assignee: Lauri Himanen

---

## [#475 Exception in new exciting parser, cannot get system data](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/475) — Markus Scheidgen, 2021-03-17

This happens to a large number of exciting calculations (>4000). It's probably due to a specific exciting version.
One example `calc_id=9Y-5JMQpBip1lF5IrAd78XpxA6DU`:
Trace:
```
Traceback (most recent call last):
File "/app/nomad/processing/data.py", line 437, in parsing
self._parser_results, logger=logger)
File "/usr/local/lib/python3.7/site-packages/excitingparser/exciting_parser.py", line 2126, in parse
self.parse_configurations()
File "/usr/local/lib/python3.7/site-packages/excitingparser/exciting_parser.py", line 2051, in parse_configurations
sec_scc = parse_configuration(self.info_parser.get('groundstate'))
File "/usr/local/lib/python3.7/site-packages/excitingparser/exciting_parser.py", line 2042, in parse_configuration
sec_system = self.parse_system(section)
File "/usr/local/lib/python3.7/site-packages/excitingparser/exciting_parser.py", line 1955, in parse_system
positions = self.info_parser.get_atom_positions(section)
File "/usr/local/lib/python3.7/site-packages/excitingparser/exciting_parser.py", line 966, in get_atom_positions
positions = np.dot(positions, cell.magnitude)
File "<__array_function__ internals>", line 6, in dot
TypeError: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
```
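A hedged repro of the failure mode: if the positions come out of the text parser as strings rather than floats, numpy promotes the pair to a string dtype and refuses to cast the float cell under the 'safe' rule. Whether this is the actual bug in the parser is an assumption:

```python
import numpy as np

positions = np.array([['0.0', '0.0', '0.0']])  # parsed as strings, not floats
cell = np.eye(3)                               # float64, like cell.magnitude
np.dot(positions, cell)
# TypeError: Cannot cast array data from dtype('float64') to dtype('<U32') ...
```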
Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

---

## [#476 Exception in new FHI-aims parser](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/476) — Markus Scheidgen, 2021-01-26

I hope this one is not random. It is also type/conversion related. Also see #475.
This happened for ~2000 calcs out of 100.000 calcs.
Example calc: yOVx7KJIL_8KdRVP_TOdyqYINPJ2
```
Traceback (most recent call last):
File "/app/nomad/processing/data.py", line 444, in parsing
self._parser_results, logger=logger)
File "/usr/local/lib/python3.7/site-packages/fhiaimsparser/fhiaims_parser.py", line 1479, in parse
self.parse_method()
File "/usr/local/lib/python3.7/site-packages/fhiaimsparser/fhiaims_parser.py", line 1283, in parse_method
sec_method.x_fhi_aims_controlIn_relativistic_threshold = val[2]
File "/app/nomad/metainfo/metainfo.py", line 849, in __setattr__
return super().__setattr__(name, value)
File "/app/nomad/metainfo/metainfo.py", line 2111, in __set__
obj.m_set(self, value)
File "/app/nomad/metainfo/metainfo.py", line 956, in m_set
value = self.__to_np(quantity_def, value)
File "/app/nomad/metainfo/metainfo.py", line 934, in __to_np
value = quantity_def.type.type(value)
ValueError: could not convert string to float: 'n'
```

Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

---

## [#330 Explicit datamodel support for external databases](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/330) — Markus Scheidgen, 2023-12-21

We currently use key-uploaders to identify data from "external dbs". We also already have an external_id. We should add another repo datamodel quantity that identifies the external db that an entry comes from. This should be used to add a respective chart to the "user" tab in the search UI.
- [x] add names for the other dbs
- [x] change uploader to origin
- [ ] add the external_db metadata for the other dbs and reindex

Milestone: v0.10.0

---

## [#430 Fix the mail issues](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/430) — Markus Scheidgen, 2021-01-11

There are several problems with the mail "system" and no real tests for it.
- it's spam for many people
- google (as a recipient) says
'''
<some.user@googlemail.com>: host gmail-smtp-in.l.google.com[64.233.166.27]
said: 550-5.7.1 [130.xxx.xxx.xxx 11] Our system has detected that this
message is 550-5.7.1 not RFC 5322 compliant: 550-5.7.1 'From' header is
missing. 550-5.7.1 To reduce the amount of spam sent to Gmail, this message
has been 550-5.7.1 blocked. Please visit 550-5.7.1
https://support.google.com/mail/?p=RfcMessageNonCompliant 550 5.7.1 and
review RFC 5322 specifications for more information. xxxxxxxxxxxxxxx.272 -
gsmtp (in reply to end of DATA command)
'''
- the email addresses are not consistent
- do we still need to send emails?

Milestone: v0.10.0

---

## [#304 GUI Tests](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/304) — Markus Scheidgen, 2021-06-18

While having component tests is nice (especially for new components), we should start with end-to-end tests based on some example data. These end-to-end tests should simply cover common end-user workflows like uploading, publishing, editing, searching, and downloading raw/archive data.

Milestone: v0.10.0 · Assignee: Lauri Himanen

---

## [#421 Improvements to text_parser](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/421) — Markus Scheidgen, 2021-06-17

The usual stuff, mandatory:
- [x] documentation (including simple example)
- [x] proper tests (you can turn `test_parsing.py` into a directory `parsing` with the old `test_parsing.py` and a new `test_text_parser.py`)
- [x] add type annotations to all public methods
- [ ] catch all matches to build a map of what has been extracted from the file and provide parser "telemetry"/statistics. In the old framework, we had an ANSI colored version of the parsed file. This was super helpful for debugging. This is also something we could show in the GUI, so you can see what was covered.
More specific suggestions:
- [x] it should be possible to initialise `Quantity` from a metainfo quantity and get name, type, shape, etc. from there
- [x] there should be automatic conversion from the unit in the text file to the unit attached to the metainfo Quantity via pint (see the sketch after this list); the unit conversion method should not be called to_si, but to_metainfo_unit or something
- [x] since Quantity is a class, str_operation should be provided via inheritance? Or at least there should be the option to also override it in a subclass?
- [x] I do not like the idea of storing the values in Quantity. That introduces a very tight coupling between the two classes; makes it harder to clear out values between parser runs; what if someone gets the idea to use the same quantity in different parsers. The values should be stored in a separate mapping in UnstructuredTextFileParser. Quantity should only be providing the pattern and convert functions.
- [x] If you do this dynamic parsing thing, where you only parse if someone asks for a quantity, why not go all the way and only parse for that quantity?
- [x] You could also add `__getattr__`/`__getattribute__` to allow more convenient `parser.quantity` style access.
- [ ] You could have an additional function that allows passing a section. This function could automatically try parsing all quantities in that section's definition and assign values accordingly.
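A minimal pint sketch for the unit-conversion item above — values and units are illustrative:

```python
import pint

ureg = pint.UnitRegistry()

file_value = 1.5 * ureg.angstrom            # unit as printed in the text file
metainfo_value = file_value.to(ureg.meter)  # unit defined on the metainfo Quantity
print(metainfo_value.magnitude)             # 1.5e-10
```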
Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

---

## [#456 Make dependence on MDTraj and MDAnalysis optional](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/456) — Markus Scheidgen, 2021-01-11

These two dependencies are not so easy to install and cause problems on a lot of systems. As our Python package gets more users, I would like to make MDTraj and MDAnalysis as optional as possible.
They are currently only used by some of our MD parsers. Particularly Gromacs is making NOMAD fail, because MDAnalysis gets imported when the parser list is compiled in `/nomad/parsing/parsers.py`. We need to make sure that these imports are optional; either with try/except or by importing only on use. Ideally the parsers will react to a missing MDAnalysis and disable a few features on parsing with a logged warning, as in the sketch below.
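A minimal sketch of the optional-import pattern described above; the function and its behaviour are illustrative, not the actual parser code:

```python
try:
    import MDAnalysis
except ImportError:
    MDAnalysis = None  # degraded feature, not a fatal error

def parse_trajectory(mainfile, logger):
    # hypothetical parsing step that needs MDAnalysis
    if MDAnalysis is None:
        logger.warning('MDAnalysis is not installed, skipping trajectory parsing')
        return None
    return MDAnalysis.Universe(mainfile)
```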
C...At the end you find example calculations for each parser, where there are no parser errors, but still neither a section system nor section scc in the archive. At least with aims and exciting the numbers are lower with the old parsers.
Compared to the old parsers, the differences in numbers are roughly:
| code     | missing systems | missing SCCs |
|----------|-----------------|--------------|
| aims     | 4.000           | 4.000        |
| exciting | 71              | 1.1000       |
| abinit   | 1.200           | 3            |
@ladinesa Please have a look at examples for:
- [x] aims
- [x] exciting
- [x] abinit
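For reference, a hedged reconstruction of the kind of aggregation that produced the response below — the field names are taken from the response itself; the exact query, filter, and client setup are assumptions:

```python
from elasticsearch import Elasticsearch

body = {
    'size': 0,
    'query': {},  # placeholder: entries without parser errors but missing system/scc
    'aggs': {
        'code-name': {
            'terms': {'field': 'dft.code_name'},
            'aggs': {
                'uploads': {
                    'terms': {'field': 'upload_id'},
                    # one example entry per upload, as in the response below
                    'aggs': {'example': {'top_hits': {'size': 1}}},
                }
            },
        }
    },
}
Elasticsearch().search(index='fairdi_nomad_reprocess_v0_8', body=body)
```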
```json
{
"took": 107,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 24461,
"max_score": 0.0,
"hits": []
},
"aggregations": {
"code-name": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "FHI-aims",
"doc_count": 21550,
"uploads": {
"doc_count_error_upper_bound": 25,
"sum_other_doc_count": 1373,
"buckets": [
{
"key": "x3XJQjRoSpWBcbt64-v_Wg",
"doc_count": 16592,
"example": {
"hits": {
"total": 16592,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "38o6zX9Nf8qJ0tXBN_ivHEv8-Dld",
"_score": 1.0,
"_source": {
"upload_id": "x3XJQjRoSpWBcbt64-v_Wg",
"mainfile": "to_Nomad/Matrix_embedded_2nm/Finite_differences/config_1273_p/control.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "38o6zX9Nf8qJ0tXBN_ivHEv8-Dld"
}
}
]
}
}
},
{
"key": "xKlQgmAtRD2zMMU-OFK27w",
"doc_count": 2702,
"example": {
"hits": {
"total": 2702,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "-swwMQilYhOVhxeO49gRLNHk2g9h",
"_score": 1.0,
"_source": {
"upload_id": "xKlQgmAtRD2zMMU-OFK27w",
"mainfile": "green_kubo_2/backups/backup.02762/calculations/errors/aims.190421.scalapack.mpi.x.80s-263927,co4426.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "-swwMQilYhOVhxeO49gRLNHk2g9h"
}
}
]
}
}
},
{
"key": "UIb4mzHbRS2pmIO1NjDZ0w",
"doc_count": 240,
"example": {
"hits": {
"total": 240,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "-smvxAYj3C12UcS9DxqjgZU1392a",
"_score": 1.0,
"_source": {
"upload_id": "UIb4mzHbRS2pmIO1NjDZ0w",
"mainfile": "ConvergenceParameters/TCNE_standing/system/PRECON_3.0/MIX_0.40/n_10/output.aims",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "-smvxAYj3C12UcS9DxqjgZU1392a"
}
}
]
}
}
},
{
"key": "UOm6k1H7SM6B60sSt9m1ow",
"doc_count": 182,
"example": {
"hits": {
"total": 182,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "u_vMwaOHo8qGV_x3MOsciq0L5P81",
"_score": 1.0,
"_source": {
"upload_id": "UOm6k1H7SM6B60sSt9m1ow",
"mainfile": "bulk_references/C/C_graphite/140708_Graphite_interlayer/HSE+MBD/lattice_c_6.860/aims.out.old",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "u_vMwaOHo8qGV_x3MOsciq0L5P81"
}
}
]
}
}
},
{
"key": "E00BJDb2TKeyioTrsePgpg",
"doc_count": 117,
"example": {
"hits": {
"total": 117,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "0mHnrwozwOPeb1-gnqacCzqwlLY3",
"_score": 1.0,
"_source": {
"upload_id": "E00BJDb2TKeyioTrsePgpg",
"mainfile": "Structure_Search/Substrates/(4, 2, -3, 3)/aims.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "0mHnrwozwOPeb1-gnqacCzqwlLY3"
}
}
]
}
}
},
{
"key": "UmaA5mM7SPS5fu7w8ZFwLg",
"doc_count": 104,
"example": {
"hits": {
"total": 104,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "0CmBql1OUmO2wPfXYc19Y2iSAsvU",
"_score": 1.0,
"_source": {
"upload_id": "UmaA5mM7SPS5fu7w8ZFwLg",
"mainfile": "mnt/lxfs2/scratch/mazheika/for_sergey/part2/NiO/cluster/Ni19O14/0.0/charge+10/NiO-really.out-0.82-0.01",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "0CmBql1OUmO2wPfXYc19Y2iSAsvU"
}
}
]
}
}
},
{
"key": "YXUIZpw5RJyV3LAsFI2MmQ",
"doc_count": 68,
"example": {
"hits": {
"total": 68,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "Ww15MpWMgMwVA0eT1h6Mcag2nPEi",
"_score": 1.0,
"_source": {
"upload_id": "YXUIZpw5RJyV3LAsFI2MmQ",
"mainfile": "SrTiO3/110/Ne2-tests4.5/0.25/PBE-light+tight-rho2-kerker.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "Ww15MpWMgMwVA0eT1h6Mcag2nPEi"
}
}
]
}
}
},
{
"key": "hSVvPAypRVmhmJB6Nuhnbw",
"doc_count": 62,
"example": {
"hits": {
"total": 62,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "yRErQhwxLuKyauIXf-_-O0YXuXrO",
"_score": 1.0,
"_source": {
"upload_id": "hSVvPAypRVmhmJB6Nuhnbw",
"mainfile": "58_Ce/cebulk/real-space/fcc/fm/k_04/tier1/pbe0/restart_2.3/2.65/Ce.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "yRErQhwxLuKyauIXf-_-O0YXuXrO"
}
}
]
}
}
},
{
"key": "6cZnVKTLRIq27IATPtQNmQ",
"doc_count": 56,
"example": {
"hits": {
"total": 56,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "3I0Kmn7Hh5LYqtqh1Nz1akaRTumv",
"_score": 1.0,
"_source": {
"upload_id": "6cZnVKTLRIq27IATPtQNmQ",
"mainfile": "scgw0/HCN/tier_2.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "3I0Kmn7Hh5LYqtqh1Nz1akaRTumv"
}
}
]
}
}
},
{
"key": "X2LLO0gKRvm75OMSihIZHA",
"doc_count": 54,
"example": {
"hits": {
"total": 54,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "685nNqKQlrVwFNCIlO16BdmWtbWR",
"_score": 1.0,
"_source": {
"upload_id": "X2LLO0gKRvm75OMSihIZHA",
"mainfile": "Mg2Ox/O7/str78/pbe0.out",
"dft": {
"code_name": "FHI-aims"
},
"calc_id": "685nNqKQlrVwFNCIlO16BdmWtbWR"
}
}
]
}
}
}
]
}
},
{
"key": "exciting",
"doc_count": 64,
"uploads": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "TVT2lO_wSj-MQ_2rzG1XiA",
"doc_count": 63,
"example": {
"hits": {
"total": 63,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "4Pqp6hHDmjuq7DxhdTLq6yj2T0DQ",
"_score": 1.0,
"_source": {
"upload_id": "TVT2lO_wSj-MQ_2rzG1XiA",
"mainfile": "1_2/Q0106_0506_0000_MOD005_DISP09/.INFOXS_par002.OUT",
"dft": {
"code_name": "exciting"
},
"calc_id": "4Pqp6hHDmjuq7DxhdTLq6yj2T0DQ"
}
}
]
}
}
},
{
"key": "ff8C1I9LTuiNVu5fyPVxnw",
"doc_count": 1,
"example": {
"hits": {
"total": 1,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "7xVdVq0MpZe3QKP64Ylg8tZDSLuR",
"_score": 1.0,
"_source": {
"upload_id": "ff8C1I9LTuiNVu5fyPVxnw",
"mainfile": "AB@Graphene/interface/trans/BSE/ip/calc/INFOXS.OUT",
"dft": {
"code_name": "exciting"
},
"calc_id": "7xVdVq0MpZe3QKP64Ylg8tZDSLuR"
}
}
]
}
}
}
]
}
},
{
"key": "ABINIT",
"doc_count": 5,
"uploads": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "dkZ_BRDqSA2DeGsH3ItFyw",
"doc_count": 4,
"example": {
"hits": {
"total": 4,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "Qp4sqtZIPs166omVKIJ4ahQgY4Ky",
"_score": 1.0,
"_source": {
"upload_id": "dkZ_BRDqSA2DeGsH3ItFyw",
"mainfile": "conv_ecut/Si.out0002",
"dft": {
"code_name": "ABINIT"
},
"calc_id": "Qp4sqtZIPs166omVKIJ4ahQgY4Ky"
}
}
]
}
}
},
{
"key": "OKYOLQhlTUqF01c8fvoG2w",
"doc_count": 1,
"example": {
"hits": {
"total": 1,
"max_score": 1.0,
"hits": [
{
"_index": "fairdi_nomad_reprocess_v0_8",
"_type": "doc",
"_id": "Bh8qTWHtlTV_UpsD0_9gZmRKfOym",
"_score": 1.0,
"_source": {
"upload_id": "OKYOLQhlTUqF01c8fvoG2w",
"mainfile": "conv_kpt/Si.out0001",
"dft": {
"code_name": "ABINIT"
},
"calc_id": "Bh8qTWHtlTV_UpsD0_9gZmRKfOym"
}
}
]
}
}
}
]
}
}
]
},
"errors": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "no \"representative\" section system found",
"doc_count": 24459
}
]
}
}
}
```

Milestone: v0.10.0 · Assignee: Alvin Noe Ladines

---

## [#418 More OASIS Documentation](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/issues/418) — Markus Scheidgen, 2022-03-15
- [ ] web page contents
- [ ] align docs with web page
- [ ] running the OASIS isolated
- [ ] running the OASIS with own keycloak
- [ ] general disclaimer on how to continue, volumes, etc.
- [ ] OASIS and the toolkit

Milestone: v0.10.0