nomad-lab-base issues (https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues)

Issue #42: create a db initialization infrastructure
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues/42
2017-12-05T15:45:48Z, Mohamed, Fawzi Roberto (fawzi) <fawzi.mohamed@fhi-berlin.mpg.de>

Probably using Flink, one should be able to scan all the data, and then stay up to date through a Kafka queue.
One could also create a queue for the full rescan, thus unifying the initial setup with the "keeping up to date" phase.
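The unified-queue idea can be sketched as follows. This is only an in-process stand-in for the proposed Flink/Kafka setup: both the full-rescan producer and the live-update producer feed the same queue, so the consumer logic is identical in both phases. All function and field names here are illustrative, not from the actual code base.

```python
from queue import SimpleQueue

# Illustrative stand-in for the Kafka topic: the initial full rescan
# and later live updates both feed the same queue.
events = SimpleQueue()

def enqueue_full_rescan(all_calculation_ids):
    """Producer for the initial setup: replay every known calculation."""
    for calc_id in all_calculation_ids:
        events.put(("rescan", calc_id))

def enqueue_update(calc_id):
    """Producer for the 'keeping up to date' phase."""
    events.put(("update", calc_id))

def drain(index):
    """Single consumer: applies events to the db index regardless of origin."""
    while not events.empty():
        phase, calc_id = events.get()
        index[calc_id] = phase
    return index

enqueue_full_rescan(["c1", "c2"])
enqueue_update("c3")
index = drain({})
# index now holds c1 and c2 (from the rescan) and c3 (from the update)
```

The point of the design is that `drain` never needs to know which phase produced an event, which is what makes the initial setup and the steady state one and the same code path.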
Issue #41: create a simple table visualization of some set of ids
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues/41
2017-12-05T15:45:48Z, Mohamed, Fawzi Roberto (fawzi) <fawzi.mohamed@fhi-berlin.mpg.de>

Create a UI visualization building on top of the metadata visualization.
It should be possible to select some metadata and get a table-like visualization of the given calculation ids.
Calculation ids can be passed in as input (with some default).

Issue #39: create a flexible normalization infrastructure
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues/39
2017-12-05T15:45:48Z, Mohamed, Fawzi Roberto (fawzi) <fawzi.mohamed@fhi-berlin.mpg.de>

Many normalizations are common to several parsers, and some of them even need several independent calculations.
Ideally we would have a Flink-based infrastructure to perform these workflows whenever a new normalized calculation is added.
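A shared normalization registry along these lines could look roughly like the sketch below. This is a pure-Python stand-in for the Flink-triggered workflow, and the two example normalizers and their field names are made up for illustration.

```python
# Hypothetical registry of normalization steps shared across parsers.
normalizers = []

def normalizer(func):
    """Register a normalization step common to several parsers."""
    normalizers.append(func)
    return func

@normalizer
def add_formula(calc):
    """Derive a sorted chemical formula string from the atom labels."""
    calc["formula"] = "".join(sorted(calc["atom_labels"]))
    return calc

@normalizer
def count_atoms(calc):
    """Record the number of atoms in the calculation."""
    calc["n_atoms"] = len(calc["atom_labels"])
    return calc

def on_new_calculation(calc):
    """Run every registered normalizer whenever a calculation is added
    (the hook a Flink job would provide)."""
    for norm in normalizers:
        calc = norm(calc)
    return calc

calc = on_new_calculation({"atom_labels": ["O", "H", "H"]})
# calc["formula"] == "HHO", calc["n_atoms"] == 3
```

Parsers would then only emit raw calculations; everything registered via `normalizer` runs for all of them, which is what keeps the common normalizations out of the individual parsers.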
Issue #24: nail down nomadinfo format in netcdf (hdf5)
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues/24
2015-12-08T13:45:11Z, Mohamed, Fawzi Roberto (fawzi) <fawzi.mohamed@fhi-berlin.mpg.de>

Currently the nomadmetainfo.nc format has some shortcomings (handling of multiple indexes in sections with multiple parents, ordering in indexes, it does not use netcdf dimension objects, ...); the group and dataset structure should be changed, which will affect all users.
This should be done as soon as possible, before people begin to rely too much on it.

Issue #36: Convert serialization format of parsers to Avro
https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-lab-base/-/issues/36
2015-12-08T11:24:05Z, Mohamed, Fawzi Roberto (fawzi) <fawzi.mohamed@fhi-berlin.mpg.de>

Currently external parsers use JSON as the format to serialize events.
This is portable and human readable, but inefficient.
[avro](https://avro.apache.org/) should be used instead.
This has two sides, a Python one and a Scala one.
A preliminary avro protocol definition is in https://gitlab.rzg.mpg.de/nomad-lab/nomad-lab-base/blob/master/core/src/main/avro/ParseEvents.avsc
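For illustration only, a parse-event record in Avro schema syntax might look roughly like this. The field names, types, and namespace below are hypothetical; the actual definition is the ParseEvents.avsc file linked above.

```json
{
  "type": "record",
  "name": "ParseEvent",
  "namespace": "example.parsers",
  "fields": [
    {"name": "event", "type": {"type": "enum", "name": "EventKind",
      "symbols": ["openSection", "closeSection", "addValue"]}},
    {"name": "metaName", "type": "string"},
    {"name": "value", "type": ["null", "string", "long", "double"], "default": null}
  ]
}
```

Because Avro schemas are themselves JSON, the same .avsc file can drive code generation on both the Python and the Scala side, while the serialized events are compact binary rather than JSON text.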