Commit 0fd36137 authored by Markus Scheidgen

Minor optimade fixes. Refactored api->app.

parent e6eceab7
Pipeline #61455 passed with stages in 19 minutes and 17 seconds
......@@ -124,7 +124,7 @@ The component library [Material-UI](https://material-ui.com/)
### docker
To run a **nomad@FAIRDI** instance, many services have to be orchestrated:
the nomad api, nomad worker, mongodb, Elasticsearch, PostgreSQL, RabbitMQ,
the nomad app, nomad worker, mongodb, Elasticsearch, PostgreSQL, RabbitMQ,
Elasticstack (logging), the nomad GUI, and a reverse proxy to keep everything together.
Further services (e.g. JupyterHub) might be needed when nomad grows.
The container platform [Docker](https://docs.docker.com/) allows us to provide all services
......
......@@ -3,7 +3,7 @@
## Introduction
The nomad infrastructure consists of a series of nomad and 3rd party services:
- nomad worker (python): task worker that will do the processing
- nomad api (python): the nomad REST API
- nomad app (python): the nomad app and its REST APIs
- nomad gui: a small server serving the web-based react gui
- proxy: an nginx server that reverse proxies all services under one port
- elastic search: nomad's search and analytics engine
......@@ -140,7 +140,7 @@ There are currently two different images and respectively two different docker f
`Dockerfile`, and `gui/Dockerfile`.
Nomad currently comprises two services,
the *worker* (does the actual processing), and the *api*. Those services can be
the *worker* (does the actual processing), and the *app*. Those services can be
run from one image that has the nomad python code and all dependencies installed. This
is covered by the `Dockerfile`.
......@@ -170,7 +170,7 @@ It is sufficient to use the implicit `docker-compose.yml` only (like in the comm
The `override` will be used automatically.
Now we can build the *docker-compose* that contains all external services (rabbitmq,
mongo, elastic, elk) and nomad services (worker, api, gui).
mongo, elastic, elk) and nomad services (worker, app, gui).
```
docker-compose build
```
......@@ -198,7 +198,7 @@ docker-compose down
### Run containers selectively
The following services/containers are managed via our docker-compose:
- rabbitmq, mongo, elastic, (elk, only for production)
- worker, api
- worker, app
- gui
- proxy
......@@ -209,7 +209,7 @@ You can also run services selectively, e.g.
```
docker-compose up -d rabbitmq mongo elastic
docker-compose up worker
docker-compose up api gui proxy
docker-compose up app gui proxy
```
## Accessing 3rd party services
......@@ -237,7 +237,7 @@ to use the right ports (see above).
## Run nomad services manually
You can run the worker, api, and gui as part of the docker infrastructure, like
You can run the worker, app, and gui as part of the docker infrastructure, like
seen above. But, of course, there are always reasons to run them manually during
development, like running them in a debugger, profiler, etc.
......@@ -253,14 +253,14 @@ To run it directly with celery, do (from the root)
celery -A nomad.processing worker -l info
```
Run the api via docker, or (from the root):
Run the app via docker, or (from the root):
```
nomad admin run api
nomad admin run app
```
You can also run worker and api together:
You can also run worker and app together:
```
nomad admin run apiworker
nomad admin run appworker
```
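
Internally, `appworker` just launches both entry points in a small process pool. A minimal sketch of that pattern (the `run_app`/`run_worker` bodies here are placeholders, and the trailing `run_forever()` is an assumption; the actual CLI change appears further down in this commit):

```
# Sketch of the `appworker` pattern: run the app and the worker as two
# separate processes from a single command.
import asyncio
from concurrent.futures import ProcessPoolExecutor

def run_app():
    ...  # placeholder: start the nomad flask app

def run_worker():
    ...  # placeholder: start the celery worker

if __name__ == '__main__':
    executor = ProcessPoolExecutor(2)
    loop = asyncio.get_event_loop()
    loop.run_in_executor(executor, run_app)     # schedule the app process
    loop.run_in_executor(executor, run_worker)  # schedule the worker process
    loop.run_forever()  # assumption: keep the parent process alive
```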
### GUI
......@@ -376,7 +376,7 @@ Here are some example launch configs for VSCode:
"request": "launch",
"module": "flask",
"env": {
"FLASK_APP": "nomad/api/__init__.py"
"FLASK_APP": "nomad/app/__init__.py"
},
"args": [
"run",
......
......@@ -14,6 +14,7 @@
from flask_restplus import Resource, abort
from flask import request
from elasticsearch_dsl import Q
from nomad import search
from nomad.metainfo.optimade import OptimadeEntry
......@@ -24,6 +25,9 @@ from .models import json_api_single_response_model, entry_listing_endpoint_parse
from .filterparser import parse_filter, FilterException
ns = api.namespace('', description='The (only) API namespace with all OPTiMaDe endpoints.')
# TODO replace with decorator that filters response_fields
def base_request_args():
if request.args.get('response_format', 'json') != 'json':
......@@ -34,7 +38,12 @@ def base_request_args():
return properties_str.split(',')
return None
ns = api.namespace('optimade', description='The (only) API namespace with all OPTiMaDe endpoints.')
def base_search_request():
""" Creates a search request for all public and optimade enabled data. """
return search.SearchRequest().owner('all', None).query(
Q('exists', field='optimade.nelements')) # TODO use the elastic annotations when done
@ns.route('/calculations')
class CalculationList(Resource):
......@@ -55,7 +64,8 @@ class CalculationList(Resource):
except Exception:
abort(400, message='bad parameter types') # TODO Specific json API error handling
search_request = search.SearchRequest().owner('all', None)
search_request = base_search_request()
if filter is not None:
try:
search_request.query(parse_filter(filter))
......@@ -95,7 +105,7 @@ class Calculation(Resource):
def get(self, id: str):
""" Retrieve a single calculation for the given id. """
request_fields = base_request_args()
search_request = search.SearchRequest().owner('all', None).search_parameters(calc_id=id)
search_request = base_search_request().search_parameters(calc_id=id)
result = search_request.execute_paginated(
page=1,
......
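
The change above factors the shared restriction (public data that actually carries optimade metadata) into `base_search_request()`, which both endpoints now reuse. A rough sketch of the resulting query pattern, with error handling simplified and the `filterparser` module path assumed:

```
# Sketch only, not the literal endpoint code from this commit.
from elasticsearch_dsl import Q
from nomad import search
# assumed module path for the filter parser imported in the diff above
from nomad.app.optimade.filterparser import parse_filter, FilterException

def optimade_search(filter_str=None):
    # only public, optimade-enabled data
    request = search.SearchRequest().owner('all', None).query(
        Q('exists', field='optimade.nelements'))
    if filter_str is not None:
        try:
            # translate the OPTiMaDe filter string into an elasticsearch query
            request.query(parse_filter(filter_str))
        except FilterException:
            raise ValueError('could not parse the OPTiMaDe filter expression')
    return request
```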
......@@ -29,8 +29,8 @@ from .api import api, base_url, url
# TODO error/warning objects
json_api_meta_object_model = api.model('JsonApiMetaObject', {
'query': fields.Nested(model=api.model('JsonApiQuery', {
json_api_meta_object_model = api.model('MetaObject', {
'query': fields.Nested(model=api.model('Query', {
'representation': fields.String(
required=True,
description='A string with the part of the URL following the base URL')
......@@ -55,7 +55,7 @@ json_api_meta_object_model = api.model('JsonApiMetaObject', {
'provider': fields.Nested(
required=True, skip_none=True,
description='Information on the database provider of the implementation.',
model=api.model('JsonApiProvider', {
model=api.model('Provider', {
'name': fields.String(
required=True,
description='A short name for the database provider.'),
......@@ -89,7 +89,7 @@ json_api_meta_object_model = api.model('JsonApiMetaObject', {
'implementation': fields.Nested(
required=False, skip_none=True,
description='Server implementation details.',
model=api.model('JsonApiImplementation', {
model=api.model('Implementation', {
'name': fields.String(
description='Name of the implementation'),
'version': fields.String(
......@@ -99,7 +99,7 @@ json_api_meta_object_model = api.model('JsonApiMetaObject', {
'maintainer': fields.Nested(
skip_none=True,
description='Details about the maintainer of the implementation',
model=api.model('JsonApiMaintainer', {
model=api.model('Maintainer', {
'email': fields.String()
})
)
......@@ -132,7 +132,7 @@ class Meta():
maintainer=dict(email='markus.scheidgen@physik.hu-berlin.de'))
json_api_links_model = api.model('JsonApiLinks', {
json_api_links_model = api.model('Links', {
'base_url': fields.String(
description='The base URL of the implementation'),
......@@ -172,7 +172,7 @@ def Links(endpoint: str, available: int, page_number: int, page_limit: int, **kw
return result
json_api_response_model = api.model('JsonApiResponse', {
json_api_response_model = api.model('Response', {
'links': fields.Nested(
required=False,
description='Links object with pagination links.',
......@@ -192,7 +192,7 @@ json_api_response_model = api.model('JsonApiResponse', {
'included are known as compound documents in the JSON API.'))
})
json_api_data_object_model = api.model('JsonApiDataObject', {
json_api_data_object_model = api.model('DataObject', {
'type': fields.String(
description='The type of the object [structure or calculations].'),
......@@ -254,7 +254,7 @@ class Property:
json_api_single_response_model = api.inherit(
'JsonApiSingleResponse', json_api_response_model, {
'SingleResponse', json_api_response_model, {
'data': fields.Nested(
model=json_api_data_object_model,
required=True,
......@@ -262,7 +262,7 @@ json_api_single_response_model = api.inherit(
})
json_api_list_response_model = api.inherit(
'JsonApiSingleResponse', json_api_response_model, {
'SingleResponse', json_api_response_model, {
'data': fields.List(
fields.Nested(json_api_data_object_model),
required=True,
......
......@@ -53,7 +53,7 @@ def run_worker():
@run.command(help='Run both app and worker.')
def apiworker():
def appworker():
executor = ProcessPoolExecutor(2)
loop = asyncio.get_event_loop()
loop.run_in_executor(executor, run_app)
......
......@@ -43,7 +43,7 @@ import io
import json
from nomad import utils, infrastructure, files, config
from nomad.coe_repo import User, Calc, LoginException
from nomad.coe_repo import User, Calc
from nomad.datamodel import CalcWithMetadata
from nomad.processing import FAILURE
......
......@@ -357,6 +357,8 @@ class SearchRequest:
""" Adds the given query as a 'and' (i.e. 'must') clause to the request. """
self._query &= query
return self
def time_range(self, start: datetime, end: datetime):
""" Adds a time range to the query. """
if start is None and end is None:
......
......@@ -15,7 +15,7 @@ We use docker-compose overrides to extend a base configuration for different sce
Example docker-compose usage:
```
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d api
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d app
```
The different overrides are:
......@@ -25,7 +25,7 @@ The different overrides are:
- *.example.yml, an example production configuration
To run your own NOMAD mirror or oasis, you should override the `example.yml` to fit your needs.
Within the overrides you can configure the NOMAD containers (api, worker, gui).
Within the overrides you can configure the NOMAD containers (app, worker, gui).
### Configuring the API and worker
......
......@@ -56,8 +56,8 @@ services:
links:
- elk
# nomad api
api:
# nomad app
app:
restart: 'no'
image: nomad/backend
environment:
......
version: '3.4'
services:
api:
app:
image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:v0.5.3
volumes:
- "./example_nomad/nomad.yaml:/app/nomad.yaml"
......
......@@ -45,8 +45,8 @@ services:
build: ../../../
image: nomad/backend
# nomad api
api:
# nomad app
app:
restart: 'no'
image: nomad/backend
depends_on:
......
......@@ -50,7 +50,7 @@ services:
- 5000:5000
- 9201:9200 # allows metricbeat config to access es
api:
app:
environment:
NOMAD_LOGSTASH_LEVEL: INFO
links:
......
......@@ -83,8 +83,8 @@ services:
- nomad_files:/app/.volumes/fs
command: python -m celery worker -l info -A nomad.processing -Q celery,cacls,uploads
# nomad api
api:
# nomad app
app:
restart: always
image: gitlab-registry.mpcdf.mpg.de/nomad-lab/nomad-fair:latest
container_name: nomad_api
......@@ -107,7 +107,7 @@ services:
container_name: nomad_gui
command: nginx -g 'daemon off;'
links:
- api
- app
volumes:
nomad_postgres:
......
window.nomadEnv = {
"apiBase": "/example-nomad/api",
"appBase": "/example-nomad",
"kibanaBase": "/fairdi/kibana",
"debug": false
};
\ No newline at end of file
......@@ -5,7 +5,7 @@ The NOMAD chart is part of the
[NOMAD source code](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR)
and can be found under `ops/helm/nomad`.
This chart allows you to run the nomad api, worker, gui, and proxy in a kubernetes cluster.
This chart allows you to run the nomad app, worker, gui, and proxy in a kubernetes cluster.
The `values.yaml` contains more documentation on the different values.
The chart can be used to run multiple nomad instances in parallel on the same cluster,
......
apiVersion: v1
kind: Service
metadata:
name: {{ include "nomad.fullname" . }}-api
name: {{ include "nomad.fullname" . }}-app
labels:
app.kubernetes.io/name: {{ include "nomad.name" . }}-api
app.kubernetes.io/name: {{ include "nomad.name" . }}-app
helm.sh/chart: {{ include "nomad.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
......@@ -15,5 +15,5 @@ spec:
protocol: TCP
name: http
selector:
app.kubernetes.io/name: {{ include "nomad.name" . }}-api
app.kubernetes.io/name: {{ include "nomad.name" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
......@@ -26,7 +26,7 @@ data:
api_secret: "{{ .Values.api.secret }}"
admin_password: "{{ .Values.api.adminPassword }}"
disable_reset: {{ .Values.api.disableReset }}
https: {{ .Values.api.https }}
https: {{ .Values.app.https }}
upload_limit: {{ .Values.api.uploadLimit }}
rabbitmq:
host: "{{ .Release.Name }}-rabbitmq"
......
......@@ -24,8 +24,9 @@ images:
tag: latest
pullPolicy: IfNotPresent
## Everthing concerning the nomad api
api:
## Everything concerning the nomad app
app:
replicas: 1
https: true
## Number of gunicorn workers.
......@@ -36,6 +37,10 @@ api:
workerClass: 'gthread'
console_loglevel: INFO
logstash_loglevel: INFO
nomadNodeType: "public"
## Everything concerning the nomad api
api:
## Secret used as cryptographic seed
secret: "defaultApiSecret"
## The administrator password (only way to ever set/change it)
......@@ -46,7 +51,6 @@ api:
disableReset: "true"
## Limit of unpublished uploads per user, except admin user
uploadLimit: 10
nomadNodeType: "public"
## Everything concerning the nomad worker
worker:
......
......@@ -45,7 +45,10 @@ def test_get_entry(published: Upload):
assert 'optimade' in search_result
def create_test_structure(meta_info, id: int, h: int, o: int, extra: List[str], periodicity: int):
def create_test_structure(
meta_info, id: int, h: int, o: int, extra: List[str], periodicity: int,
without_optimade: bool = False):
atom_labels = ['H' for i in range(0, h)] + ['O' for i in range(0, o)] + extra
test_vector = np.array([0, 0, 0])
......@@ -71,9 +74,25 @@ def create_test_structure(meta_info, id: int, h: int, o: int, extra: List[str],
upload_id='test_uload_id', calc_id='test_calc_id_%d' % id, mainfile='test_mainfile',
published=True, with_embargo=False)
calc.apply_domain_metadata(backend)
if without_optimade:
calc.optimade = None
search.Entry.from_calc_with_metadata(calc).save()
def test_no_optimade(meta_info, elastic, api):
create_test_structure(meta_info, 1, 2, 1, [], 0)
create_test_structure(meta_info, 2, 2, 1, [], 0, without_optimade=True)
search.refresh()
rv = api.get('/calculations')
assert rv.status_code == 200
data = json.loads(rv.data)
assert data['meta']['data_returned'] == 1
@pytest.fixture(scope='module')
def example_structures(meta_info, elastic_infra):
clear_elastic(elastic_infra)
......@@ -142,7 +161,7 @@ def test_list_endpoint(api, example_structures):
assert rv.status_code == 200
data = json.loads(rv.data)
# TODO replace with real assertions
print(json.dumps(data, indent=2))
# print(json.dumps(data, indent=2))
def test_list_endpoint_request_fields(api, example_structures):
......@@ -150,7 +169,7 @@ def test_list_endpoint_request_fields(api, example_structures):
assert rv.status_code == 200
data = json.loads(rv.data)
# TODO replace with real assertions
print(json.dumps(data, indent=2))
# print(json.dumps(data, indent=2))
def test_single_endpoint(api, example_structures):
......@@ -158,7 +177,7 @@ def test_single_endpoint(api, example_structures):
assert rv.status_code == 200
data = json.loads(rv.data)
# TODO replace with real assertions
print(json.dumps(data, indent=2))
# print(json.dumps(data, indent=2))
def test_base_info_endpoint(api):
......@@ -166,7 +185,7 @@ def test_base_info_endpoint(api):
assert rv.status_code == 200
data = json.loads(rv.data)
# TODO replace with real assertions
print(json.dumps(data, indent=2))
# print(json.dumps(data, indent=2))
def test_calculation_info_endpoint(api):
......@@ -174,7 +193,7 @@ def test_calculation_info_endpoint(api):
assert rv.status_code == 200
data = json.loads(rv.data)
# TODO replace with real assertions
print(json.dumps(data, indent=2))
# print(json.dumps(data, indent=2))
# TODO test single with request_fields
......