diff --git a/docs/archive.md b/docs/archive.md
new file mode 100644
index 0000000000000000000000000000000000000000..b8649d3f925d08b9cd39aa055e62a58aa27746cb
--- /dev/null
+++ b/docs/archive.md
@@ -0,0 +1,142 @@
+# Using the Archive and Metainfo
+
+## Introduction
+
+NOMAD stores all processed data in a *well defined*, *structured*, and *machine readable*
+format. Well defined means that each element is backed by a formal definition that provides
+a name, description, location, shape, type, and potential unit for that data. The format
+has a hierarchical structure that logically organizes data in sections and sub-sections
+and allows cross-references between pieces of data. Formal definitions and the corresponding
+data structures make NOMAD data machine processable.
+
+![archive example](assets/archive-example.png)
+#### The Metainfo is the schema for Archive data.
+The Archive stores descriptive and structured information about materials-science
+data. Each entry in NOMAD is associated with one Archive that contains all the processed
+information of that entry. What information can possibly exist in an archive, how this
+information is structured, and how this information is to be interpreted is governed
+by the Metainfo.
+
+#### On schemas and definitions
+Each piece of Archive data has a formal definition in the Metainfo. These definitions
+provide data types with names, descriptions, categories, and further information that
+applies to all incarnations of a certain data type.
+
+Consider a simulation run. Each
+simulation run in NOMAD is represented by a *section* called *run*. It can contain
+*calculation* results, simulated *systems*, applied *methods*, the used *program*, etc.
+What constitutes a simulation run is *defined* in the metainfo with a *section definition*.
+All other elements in the Archive (e.g. *calculation*, *system*, ...) have similar definitions.
+
+Definitions follow a formal model. Depending on the definition type, each definition
+has to provide certain information: *name*, *description*, *shape*, *units*, *type*, etc.
+
+#### Types of definitions
+
+- *Sections* are the building blocks for hierarchical data. A section can contain other
+  sections (via *sub-sections*) and data (via *quantities*).
+- *Sub-sections* define a containment relationship between sections.
+- *Quantities* define a piece of data in a section.
+- *References* are special quantities that allow a section to reference another
+  section or quantity.
+- *Categories* allow to categorize definitions.
+- *Packages* are used to organize definitions.
+
+#### Interfaces
+NOMAD Metainfo is kept independent of the actual storage format and is not bound to any
+specific storage method. In our practical implementation, we use a binary form of JSON,
+called [msgpack](https://msgpack.org/), on our servers and provide Archive data as JSON via
+our API. For NOMAD end-users the internal storage format is of little relevance, because
+the archive data is solely served by NOMAD's API. On top of the JSON API data, the
+[NOMAD Python package](pythonlib.md) provides a more convenient interface for Python users.
+
+
+## Archive JSON interface
+
+The [API section](api.md#access-archives) demonstrates how to access an Archive. The
+API will give you JSON data like this:
+
+```json title="https://nomad-lab.eu/prod/v1/api/v1/entries/--dLZstNvL_x05wDg2djQmlU_oKn/archive"
+{
+    "run": [
+        {
+            "program": {...},
+            "method": [...],
+            "system": [
+                {...},
+                {...},
+                {...},
+                {...},
+                {
+                    "type": "bulk",
+                    "configuration_raw_gid": "-ZnDK8gT9P3_xtArfKlCrDOt9gba",
+                    "is_representative": true,
+                    "chemical_composition": "KKKGaGaGaGaGaGaGaGaGa",
+                    "chemical_composition_hill": "Ga9K3",
+                    "chemical_composition_reduced": "K3Ga9",
+                    "atoms": {...},
+                    "springer_material": [...],
+                    "symmetry": [...]
+                }
+            ],
+            "calculation": [...]
+        }
+    ],
+    "workflow": [...],
+    "metadata": {...},
+    "results":{
+        "material": {...},
+        "method": {...},
+        "properties": {...},
+    }
+}
+```
+
+This will show you the Archive as a hierarchy of JSON objects (each object is a section),
+where each key is a property (e.g. a quantity or sub-section). Of course you can use
+this data in this JSON form. You can expect that the same key (each item has a formal
+definition) always provides the same type of data. But not all keys are present in
+each archive, and not all lists have the same number of objects. This depends on the
+data. For example, some *runs* contain many systems (e.g. geometry optimizations), others
+don't; typically *bulk* systems will have *symmetry* data, non-bulk systems might not.
+To learn what each key means, you need to look up its definition in the Metainfo.
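+
+If you want to experiment with this JSON in Python, a minimal sketch using the
+`requests` library could look like this (the entry id is the one from the example
+above; we assume the response wraps the archive under `data.archive`):
+
+```python
+import requests
+
+response = requests.get(
+    'https://nomad-lab.eu/prod/v1/api/v1/entries/'
+    '--dLZstNvL_x05wDg2djQmlU_oKn/archive')
+# the archive JSON shown above
+json_data = response.json()['data']['archive']
+print(json_data['metadata'])
+```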
+
+{{ metainfo_data() }}
+
+
+## Archive Python interface
+
+In Python, your JSON data is typically represented as nested combinations of dictionaries
+and lists. Of course, you could work with this right away. To make it easier for Python
+programmers, the [NOMAD Python package](pythonlib.md) allows you to use this
+JSON data with a higher-level interface that provides the following advantages:
+
+- code completion in dynamic coding environments like Jupyter notebooks
+- a cleaner syntax that uses attributes instead of dictionary access
+- all higher-dimensional numerical data is represented as numpy arrays
+- references can be navigated directly
+- numerical data has a Pint unit attached to it
+
+For each section the Python package contains a Python class that corresponds to its
+definition in the metainfo. You can use these classes to access `json_data` downloaded
+via API:
+```python
+from nomad.datamodel import EntryArchive
+from nomad.units import ureg
+
+archive = EntryArchive.m_from_dict(json_data)
+calc = archive.run[0].calculation[-1]
+total_energy_in_ev = calc.energy.total.value.to(ureg.eV).m
+formula = calc.system_ref.chemical_formula_reduced
+```
+
+Archive data can also be serialized into JSON again:
+```python
+import json
+
+print(json.dumps(calc.m_to_dict(), indent=2))
+```
+
+## Metainfo Python interface
+
+To learn more about the Python interface, look at the [Metainfo documentation](metainfo.md)
+that explains how the underlying Python classes work, and how you can extend the
+metainfo by providing your own classes.
diff --git a/docs/index.md b/docs/index.md
index ddf0bbda0648519a6b13081c5a99525cda76a7ab..99980e6b705df379f7d939181b6e62f7ee7c0f8b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -5,21 +5,15 @@ science data. Here, NOMAD is a [web-application and database](https://nomad-lab.
 that allows data to be published centrally. But you can also use the NOMAD software to build your
 own local [NOMAD Oasis](../oasis/).
 
-<div style="display: flex; flex-direct: row; align-items: start; gap: 16px;">
-  <div style="flex-grow: 1; min-width: 400px">
-    <p><i>More than 12 million of simulations from over 400 authors world-wide</i></p>
-    <ul>
-      <li>Free publication and sharing data of data</li>
-      <li>Manage research data though its whole life-cycle</li>
-      <li>Extracts <b>rich metadata</b> from data automatically</li>
-      <li>All data in <b>raw</b> and <b>machine processable</b> form</li>
-      <li>Use integrated tools to <b>explore</b>, <b>visualize</b>, and <b>analyze</b></li>
-    </ul>
-  </div>
-  <div>
-    <img src="../assets/nomad-hero-shot.png"/>
-  </div>
-</div>
+
+![NOMAD](assets/nomad-hero-shot.png){ align=right width=400 }
+*More than 12 million simulations from over 400 authors worldwide*
+
+- Publish and share data for free
+- Manage research data through its whole life-cycle
+- Extract **rich metadata** from data automatically
+- Access all data in **raw** and **machine processable** form
+- Use integrated tools to **explore**, **visualize**, and **analyze**
 
 ## History of NOMAD
 NOMAD was founded in 2014 as a repository for sharing electronic structure code files.
@@ -63,8 +57,7 @@ Each *sections* comprised a set of *quantities* on a common subject.
 All *sections* and *quantities* are backed by a formal schema that defines names, descriptions, types, shapes, and units.
 We sometimes call this data *archive* and the schema *metainfo*.
 
-### NOMAD datamodel: *uploads*, *entries*, *files*, *datasets*
-![datamodel](assets/datamodel.png)
+### Datamodel: *uploads*, *entries*, *files*, *datasets*
 
 Uploaded *raw* files are managed in *uploads*.
 Users can create *uploads* and use them like projects.
@@ -72,6 +65,11 @@ You can share them with other users, incrementally add and modify data in them,
 As long as an *upload* is not published, you can continue to provide files, delete the upload again, or test how NOMAD is processing your files.
 Once an upload is published, it becomes immutable.
 
+<figure markdown>
+  ![datamodel](assets/datamodel.png){ width=600 }
+  <figcaption>NOMAD's main entities</figcaption>
+</figure>
+
 An *upload* can contain an arbitrary directory structure of *raw* files.
 For each recognized *mainfile*, NOMAD creates an entry.
 An *upload* therefore contains a list of *entries*.
@@ -80,12 +78,16 @@ Each *entry* is associated with its *mainfile*, an *archive*, and all other *aux
 *Entries* (of many uploads) can be manually curated into *datasets*. You can get a DOI for *datasets*.
 
 ### Using NOMAD software locally (the Oasis)
-![oasis use-cases](assets/oasis-use-cases.png)
 
 The software that runs NOMAD is Open-Source and can be used independently of the NOMAD
 *central installation* that is run at [http://nomad-lab.eu](http://nomad-lab.eu).
 We call any NOMAD installation that is not the *central* one a NOMAD Oasis.
 
+<figure markdown>
+  ![oasis use-cases](assets/oasis-use-cases.png){ width=700 }
+  <figcaption>NOMAD Oasis use-cases</figcaption>
+</figure>
+
 There are several use-cases for the NOMAD software. Of course, other
 uses and hybrids are imaginable:
 
@@ -101,13 +103,16 @@ This what we do in the [FAIRmat project](https://www.fair-di.eu/fairmat/consorti
 
 ### A containerized cloud enabled architecture
 
-![nomad architecture](assets/architecture.png)
-
 NOMAD is a modern web-application that requires a lot of services to run. Some are
 NOMAD specific, others are 3rd party products. While all services can be traditionally
 installed and run on a single server, NOMAD advocates the use of containers and operating
 NOMAD in a cloud environment.
 
+<figure markdown>
+  ![nomad architecture](assets/architecture.png)
+  <figcaption>NOMAD architecture</figcaption>
+</figure>
+
 NOMAD comprises two main services, its *app* and the *worker*. The *app* serves
 our API, graphical user interface, and documentation. It is the outward facing part of
 NOMAD. The worker runs all the processing (parsing, normalization). Their separation allows
@@ -130,14 +135,17 @@ to constantly produce a latest version of docker image and Python package.
 
 ### NOMAD uses a modern and rich stack of frameworks, systems, and libraries
 
-![nomad stack](assets/stack.png)
-
 Besides various scientific computing, machine learning, and computational material
 science libraries (e.g. numpy, scikit-learn, tensorflow, ase, spglib, matid, and many more),
 NOMAD uses a set of freely available technologies that already solve most
 of its processing, storage, availability, and scaling goals. The following is a
 non-comprehensive overview of used languages, libraries, frameworks, and services.
 
+<figure markdown>
+  ![nomad stack](assets/stack.png)
+  <figcaption>NOMAD components and dependencies</figcaption>
+</figure>
+
 #### Python 3
 
 The *backend* of NOMAD is written in Python. This includes all parsers, normalizers,
diff --git a/docs/metainfo.md b/docs/metainfo.md
index f40592913994c5c06dd88f63c990f4045e0e84c7..4f9d5cfb7a69ddb4ddee803244a058c704dc556c 100644
--- a/docs/metainfo.md
+++ b/docs/metainfo.md
@@ -1,48 +1,12 @@
-# Data schema (Metainfo)
+# Extending the Archive and Metainfo
 
-## Introduction
+In [Using the Archive and Metainfo](archive.md), we learned what the Archive and Metainfo
+are and saw how the Python interface is used to work with Archive data. The
+Metainfo is written down in Python code as a set of classes that define *sections*
+and their properties. Here, we will look at how these Metainfo classes work and how
+the Metainfo can be extended with new definitions.
 
-The NOMAD Metainfo stores descriptive and structured information about materials-science
-data contained in the NOMAD Archive. The Metainfo can be understood as the schema of
-the Archive. The NOMAD Archive data is
-structured to be independent of the electronic-structure theory code or molecular-simulation,
-(or beyond). The NOMAD Metainfo can be browsed as part of the [NOMAD Repository and Archive web application](https://nomad-lab.eu/prod/v1/gui/metainfo).
-
-Typically (meta-)data definitions are generated only for a predesigned and specific scientific field,
-application or code. In contrast, the NOMAD Metainfo considers all pertinent information
-in the input and output files of the supported electronic-structure theory, quantum chemistry,
-and molecular-dynamics (force-field) codes. This ensures a complete coverage of all
-material and molecule properties, even though some properties might not be as important as
-others, or are missing in some input/output files of electronic-structure programs.
-
-![archive example](assets/archive-example.png)
-
-NOMAD Metainfo is kept independent of the actual storage format and is not bound to any
-specific storage method. In our practical implementation, we use a binary form of JSON,
-called [msgpack](https://msgpack.org/) on our servers and provide Archive data as JSON via
-our API. For NOMAD end-users the internal storage format is of little relevance, because
-the archive data is solely served by NOMAD's API.
-
-The NOMAD Metainfo started within the [NOMAD Laboratory](https://nomad-lab.eu). It was discussed at the
-[CECAM workshop Towards a Common Format for Computational Materials Science Data](https://th.fhi-berlin.mpg.de/meetings/FCMSD2016)
-and is open to external contributions and extensions. More information can be found in
-[Towards a Common Format for Computational Materials Science Data (Psi-K 2016 Highlight)](http://th.fhi-berlin.mpg.de/site/uploads/Publications/Psik_Highlight_131-2016.pdf).
-
-
-## Metainfo Python Interface
-
-The NOMAD meta-info allows to define schemas for physics data independent of the used
-storage format. It allows to define physics quantities with types, complex shapes
-(vetors, matrices, etc.), units, links, and descriptions. It allows to organize large
-amounts of these quantities in containment hierarchies of extendable sections, references
-between sections, and additional quantity categories.
-
-NOMAD uses the meta-info to define all archive data, repository meta-data, (and encyclopedia
-data). The meta-info provides a convenient Python interface to create,
-manipulate, and access data. We also use it to map data to various storage formats,
-including JSON, (HDF5), mongodb, and elastic search.
-
-### Starting example
+## Starting example
 
 ```python
 from nomad.metainfo import MSection, Quantity, SubSection, MEnum, Units
 import ase.data
@@ -58,7 +22,8 @@ class System(MSection):
         Defines the number of atoms in the system.
         ''')
 
-    atom_labels = Quantity(type=MEnum(ase.data.chemical_symbols), shape['n_atoms'])
+    atom_labels = Quantity(
+        type=MEnum(ase.data.chemical_symbols), shape=['n_atoms'])
     atom_positions = Quantity(type=float, shape=['n_atoms', 3], unit=Units.m)
     simulation_cell = Quantity(type=float, shape=[3, 3], unit=Units.m)
     pbc = Quantity(type=bool, shape=[3])
@@ -122,7 +87,7 @@ This will serialize the data into JSON:
 }
 ```
 
-### Definitions and Instances
+## Definitions and Instances
 
 As you already saw in the example, we first need to define what data can look like (schema),
 before we can actually program with data. Because schema and data are often discussed in the
@@ -160,50 +125,52 @@ there is a generated object derived from :class:`Section` that is available via
 These Python classes that are used to represent metainfo definitions form an inheritance
 hierarchy to share common properties:
 
-.. autoclass:: Definition
+- `name`, each definition has a name. This is typically taken from the corresponding
+Python construct: a section class's name becomes the section name; a quantity gets its
+name from the Python attribute it is assigned to; etc.
+- `description`, each definition should have one. Either set it directly or use *doc strings*.
+- `links`, a list of useful internet references.
+- `more`, a dictionary of custom information. All additional `kwargs` that are set when
+creating a definition are added to `more`. See the sketch below.
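+
+As a sketch, a quantity definition could set these common properties like this (the
+section, quantity, link, and custom kwarg are made up for illustration):
+
+```python
+from nomad.metainfo import MSection, Quantity
+
+
+class MySection(MSection):
+    dimensionality = Quantity(
+        type=int,
+        description='The number of periodic dimensions.',
+        links=['https://en.wikipedia.org/wiki/Periodic_boundary_conditions'],
+        # any additional kwarg ends up in the `more` dictionary
+        my_custom_annotation='some custom information')
+```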
 
+### Sections
 
-### Quantities
+Sections are defined with Python classes that extend `MSection` (or other section classes):
 
+- `base_sections` are automatically taken from the Python class's base classes.
+- `extends_base_section` is a boolean that determines the inheritance behavior. If this is `False`,
+normal Python inheritance applies and this section will inherit all properties (sub-sections,
+quantities) from all base classes. If this is `True`, all definitions in this section
+will be added to the properties of the base class's section. This allows existing
+sections to be extended with additional properties, as sketched below.
 
+### Quantities
+
+Quantity definitions are the main building blocks of metainfo schemas. Each quantity
-represents a single piece of data.
-
-.. autoclass:: Quantity
-
-
-### Sections and Sub-Sections
-
-.. _metainfo-sections:
-
-
-The NOMAD Metainfo allows to create hierarchical (meta-)data structures. A hierarchy
-is basically a tree, where *sections* make up the root and inner nodes of the tree,
-and *quantities* are the leaves of the tree. In this sense a section can hold further
-sub-sections (more branches of the tree) and quantities (leaves). We say a section can
-have two types of *properties*: *sub-sections* and *quantities*.
-
-There is a clear distinction between *section* and *sub-section*. The term *section*
-refers to an object that holds data (*properties*) and the term *sub-section* refers to a
-relation between *sections*, i.e. between the containing section and the contained
-sections of the same type. Furthermore, we have to distinguish between *sections* and
-*section definitions*, as well as *sub-sections* and *sub-section definitions*. A *section
-definition* defines the possible properties of its instances, and an instantiating *section*
-can store quantities and sub-sections according to its definition. A *sub-section*
-definition defines a principle containment relationship between two *section definitions*
-and a *sub-section* is a set of *sections* (contents) contained in another *section* (container).
-*Sub-section definitions* are parts (*properties*) of the definition of containing sections.
-
-.. autoclass:: Section
-
-.. autoclass:: SubSection
-
-.. _metainfo-categories:
-
+represents a single piece of data. Quantities can define:
+
+- A `type`, this can be a primitive Python type (`str`, `int`, `bool`), a numpy
+data type (`np.dtype('float64')`), an `MEnum('item1', ..., 'itemN')`, a predefined
+metainfo type (`Datetime`, `JSON`, `File`, ...), or another section or quantity to define
+a reference type.
+- A `shape` that defines the dimensionality of the quantity. Examples are: `[]` (number),
+`['*']` (list), `[3, 3]` (3 by 3 matrix), `['n_elements']` (a vector of length defined by
+another quantity `n_elements`).
+- A physics `unit`. We use [Pint](https://pint.readthedocs.io/en/stable/) here. You can
+use unit strings that are parsed by Pint, e.g. `meter`, `m`, `m/s^2`. As a convention, the
+metainfo uses only SI units. The sketch below combines these options.
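+
+A minimal sketch with made-up quantities, combining `type`, `shape`, and `unit`:
+
+```python
+import numpy as np
+
+from nomad.metainfo import MEnum, MSection, Quantity
+
+
+class Vibrations(MSection):
+    n_modes = Quantity(type=int)
+    # a vector of enum values; its length is given by the quantity n_modes
+    mode_symmetry = Quantity(
+        type=MEnum('A1', 'E', 'T2'), shape=['n_modes'])
+    # a numpy array with an SI unit
+    frequencies = Quantity(
+        type=np.dtype('float64'), shape=['n_modes'], unit='1/s')
+```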
+
+### Sub-Sections
+
+A sub-section defines a named property of a section that refers to another section. It
+defines that a section can contain another section.
+
+- `sub_section` (aliases `section_def`, `sub_section_def`) defines the section that can
+be contained.
+- `repeats` is a boolean that determines whether the sub-section relationship allows multiple
+contained sections or only one, as in the sketch below.
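+
+A minimal sketch of a containment relationship, reusing the `System` section from the
+starting example:
+
+```python
+from nomad.metainfo import Datetime, MSection, Quantity, SubSection
+
+
+class Run(MSection):
+    timestamp = Quantity(type=Datetime)
+    # a run can contain any number of systems
+    systems = SubSection(sub_section=System, repeats=True)
+```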
 
 ### References and Proxies
 
-Beside creating hierarchies (e.g. tree structures) with `SubSection`, the metainfo
+Besides creating hierarchies (e.g. tree structures) with sub-sections, the metainfo
 also allows cross-references from sections to other sections or quantity
 values:
 
@@ -230,11 +197,16 @@ the references value. Internally, we do not store the values, but a reference to
 section that holds the referenced quantity. Therefore, when you want to
 assign a value reference, you use the section with the quantity and not the value itself.
 
-Currently this metainfo implementation only supports references within a single
-section hierarchy (e.g. the same JSON file). References are stored as paths from the
-root section, over sub-sections, to the referenced section or quantity value. Each path segment is
-the name of the sub-section or an index in a repeatable sub-section:
-`/system/0` or `/system/0/atom_labels`.
+References are serialized as URLs. There are different types of reference URLs:
+
+- `#/run/0/calculation/1`, a reference in the same Archive
+- `/run/0/calculation/1`, a reference in the same Archive (legacy version)
+- `../upload/archive/mainfile/{mainfile}#/run/0`, a reference into an Archive of the same upload
+- `/entries/{entry_id}/archive#/run/0/calculation/1`, a reference into the Archive of a different entry on the same NOMAD installation
+- `/uploads/{upload_id}/entries/{entry_id}/archive#/run/0/calculation/1`, similar but based on uploads
+- `https://myoasis.de/api/v1/uploads/{upload_id}/entries/{entry_id}/archive#/run/0/calculation/1`, a global reference towards a different NOMAD installation (Oasis)
+
+The host and path parts of the URLs correspond to the NOMAD API. The anchors are paths from the root section of an Archive, over its sub-sections, to the referenced section or quantity value. Each path segment is the name of the sub-section or an index in a repeatable sub-section: `/system/0` or `/system/0/atom_labels`.
 
 References are automatically serialized by `MSection.m_to_dict`. When de-serializing
 data with `MSection.m_from_dict`, these references are not resolved right away,
@@ -242,21 +214,18 @@ because the references section might not yet be available. Instead references ar
 as `MProxy` instances. These objects are automatically replaced by the referenced
 object when a respective quantity is accessed.
 
-.. autoclass:: MProxy
-
 If you want to define references, it might not be possible to define the referenced
 section or quantity beforehand, due to how Python definitions and imports work. In these
-cases, you can use a proxy to reference the reference type:
+cases, you can use a proxy in place of the referenced type. There is a special proxy
+implementation for sections:
 
 ```python
 class Calculation(MSection):
-    system = Quantity(type=MProxy('System')
-    atom_labels = Quantity(type=MProxy('System/atom_labels')
+    system = Quantity(type=SectionProxy('System'))
 ```
 
-The strings given to `MProxy` are paths within the available definitions. The above example
-works, if `System` and `System/atom_labels` are eventually defined in the same package.
-
+The strings given to `SectionProxy` are paths within the available definitions.
+The above example works if `System` is eventually defined in the same package.
 
 ### Categories
 
@@ -274,80 +243,14 @@ class CategoryName(MCategory):
 
 ### Packages
 
-.. autoclass:: Package
-
-.. _metainfo-custom-types:
-
-
-### Custom data types
-
-.. autoclass:: DataType
-    :members:
-
-.. autoclass:: MEnum
-
-
-### Reflection and custom data storage
-
-When manipulating metainfo data in Python, all data is represented as Python objects, where
-objects correspond to `section instance` and their attributes to `quantity values` or
-`section instances` of sub-sections. By defining sections with `section classes` each
-of these Python objects already has an interface that allows to get/set quantities and
-sub-sections. But often this interface is too limited, or the specific section and
-quantity definitions are unknown when writing code.
-
-.. autoclass:: MSection
-    :members:
-
-.. autoclass:: MetainfoError
-.. autoclass:: DeriveError
-.. autoclass:: MetainfoReferenceError
-
-.. _metainfo-urls:
-
-### Context
-
-.. autoclass:: MResource
-
-### A more complex example
-
-.. literalinclude:: ../nomad/metainfo/example.py
-    :language: python
-
-## Accessing the Metainfo
-
-Above you learned what the metainfo is and how to create metainfo definitions and work
-with metainfo data in Python. But how do you get access to the existing metainfo definitions
-within NOMAD? We call the complete set of all metainfo definitions the *NOMAD Metainfo*.
-
-This *NOMAD Metainfo* comprises definitions from various packages defined by all the
-parsers and converters (and respective code outputs and formats) that NOMAD supports. In
-addition there are *common* packages that contain definitions that might be relevant to
-different kinds of archive data.
-
-### Python
-
-In the NOMAD source-code all metainfo definitions are materialized as Python source files
-that contain the definitions in the format described above. If you have installed the
-NOMAD Python package (see :ref:`install-client`), you can simply import the respective
-Python modules:
-
+Metainfo packages correspond to Python packages. Typically, you want your metainfo
+Python files to follow this pattern:
 ```python
-from nomad.datamodel.metainfo.public import m_package
-print(m_package.m_to_json(indent=2))
-
-from nomad.datamodel.metainfo.public import section_run
-my_run = section_run()
-```
-
+from nomad.metainfo import Package
 
-Many more examples about how to read the NOMAD Metainfo programmatically can be found
-[here](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/tree/master/examples/access_metainfo.py).
+m_package = Package()
 
-### API
+# Your section classes and categories
 
-In addition, a JSON version of the NOMAD Metainfo is available through our API via the
-`metainfo` endpoint.
-You can get :api:`one giant JSON with all definitions <metainfo/>`, or you
-can access the metainfo for specific packages, e.g. the :api:`VASP metainfo <metainfo/vasp.json>`. The
-returned JSON will also contain all packages that the requested package depends on.
+m_package.__init_metainfo__()
+```
diff --git a/docs/oasis.md b/docs/oasis.md
index 61043623de098f9ef474d93bf4a5a5ace8bf653a..6cded45bcc4aa379031cad0d333c08ffd5aa4c01 100644
--- a/docs/oasis.md
+++ b/docs/oasis.md
@@ -14,7 +14,14 @@ docker with docker-compose, and in a kubernetes cluster. We recommend using dock
 
 ## Before you start
 
+### Choosing hardware and installation type
+
+Coming soon ...
+
+### Using the central user management
+
 We need to register some information about your OASIS in the central user management:
+
 - The hostname for the machine you run NOMAD on. This is important for redirects between
 your OASIS and the central NOMAD user-management and to allow your users to upload files (via GUI or API).
 Your machine needs to be accessible under this hostname from the public internet. The host
@@ -32,7 +39,7 @@ the right information.
 
 In principle, you can also run your own user management. This is not yet documented.
 
-## docker and docker compose
+## Docker and docker compose
 
 ### Pre-requisites
 
@@ -54,6 +61,7 @@ Further, we will need to mount some configuration files to configure the NOMAD s
 within their respective containers.
 
 There are three files to configure:
+
 - `docker-compose.yaml`
 - `nomad.yaml`
 - `nginx.conf`
@@ -62,7 +70,7 @@ In this example, we have all files in the same directory (the directory we also
 You can download example files from
 [here](https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/tree/master/ops/docker-compose/nomad-oasis/).
 
-### Docker compose
+### docker-compose.yaml
 
 The most basic `docker-compose.yaml` to run an OASIS looks like this:
 
@@ -167,6 +175,7 @@ volumes:
 There are no mandatory changes necessary.
 
 A few things to notice:
+
 - All services use docker volumes for storage. This could be changed to host mounts.
 - It mounts three configuration files that need to be provided (see below): `nomad.yaml`, `nginx.conf`, `env.js`.
 - The only exposed port is `80`. This could be changed to a desired port if necessary.
@@ -208,6 +217,7 @@ elastic:
 ```
 
 You need to change:
+
 - Replace `your-host` and admin credentials respectively.
 - `api_base_path` defines the path under which the app is run. It needs to be changed if you use a different base path.
 - The admin user credentials (id, username, password, email).
@@ -291,7 +301,7 @@ Gunicorn is the WSGI-server that runs the nomad app. Consult the
 [gunicorn documentation](https://docs.gunicorn.org/en/stable/configure.html) for
 configuration options.
 
-### Starting and stopping NOMAD
+### Running NOMAD
 
 If you prepared the above files, simply use the usual `docker-compose` commands to start everything.
 
@@ -343,11 +353,12 @@ docker exec -ti nomad_oasis_app /bin/bash
 ```
 
 If you want to report problems with your OASIS, please provide the logs for:
+
 - nomad_oasis_app
 - nomad_oasis_worker
 - nomad_oasis_gui
 
-## base linux (without docker)
+## Base Linux (without docker)
 
 ### Pre-requisites
 
@@ -384,7 +395,7 @@ curl "https://nomad-lab.eu/prod/rae/beta/dist/nomad-lab.tar.gz" -o nomad-lab.tar
 pip install nomad-lab.tar.gz[all]
 ```
 
-### Configure NOMAD - nomad.yaml
+### nomad.yaml
 
 The `nomad.yaml` is our central config file. You should write a `nomad.yaml` like this:
 
@@ -410,13 +421,14 @@ elastic:
 ```
 
 You need to change:
+
 - Replace `your-host` and admin credentials respectively.
 - `api_base_path` defines the path under which the app is run. It needs to be changed if you use a different base path.
 
 A few things to notice:
+
 - Be secretive about your admin credentials; make sure this file is not publicly readable.
 
-### Configure NOMAD - nginx
+### nginx
 
 You can generate a suitable `nginx.conf` with the `nomad` command line command that
 comes with the *nomad-lab* Python package. If you serve other content besides NOMAD with
@@ -455,17 +467,19 @@ celery worker -l info -A nomad.processing -Q celery,calcs,uploads
 
 This should give you a working OASIS at `http://<your-host>/<your-path-prefix>`.
 
-## kubernetes
+## Kubernetes
 
 *This is not yet documented.*
 
-## Migrating from an older version (0.7.x to 0.8.x)
+## Migrating from an older version (0.8.x to 1.x)
 
-*This documentation is outdated and will be removed in future releases.*
-
-Between versions 0.7.x and 0.8.x we needed to change how archive and metadata data is stored
+Between versions 0.10.x and 1.x, we needed to change how archive and metadata are stored
 internally in files and databases. This means you cannot simply start a new version of
-NOMAD on top of the old data. But there is a strategy to adapt the data.
+NOMAD on top of the old data. But there is a strategy to adapt the data. This should
+work for data based on NOMAD >0.8.0 and <= 0.10.x.
+
+The overall strategy is to create a new mongo database, copy all information, and then
+reprocess all data for the new version.
 
 First, shut down the OASIS and remove all old containers.
 ```sh
@@ -474,9 +488,9 @@ docker-compose rm -f
 ```
 
 Update your config files (`docker-compose.yaml`, `nomad.yaml`, `nginx.conf`) according
-to the latest documentation (see above). Especially make sure to select a new name for
-databases and search index in your `nomad.yaml` (e.g. `nomad_v0_8`). This will
-allow us to create new data while retaining the old, i.e. to copy the old data over.
+to the latest documentation (see above). Make sure to use index and database names that
+differ from those of the old installation. The default values contain a version number in
+those names; if you don't overwrite those defaults, you should be safe.
 
 Make sure you get the latest images and start the OASIS with the new version of NOMAD:
 ```sh
@@ -489,12 +503,13 @@ because we are using a different database and search index now.
 
 To migrate the data, we created a command that you can run within your OASIS' NOMAD
 application container. This command takes the old database name as an argument; it will copy
-all data over and then reprocess the data to create data in the new archive format and
-populate the search index. The default database name in version 0.7.x installations was `nomad_fairdi`.
-Be patient.
+all data from the old mongo db to the current one. The default 0.8.x database name
+was `nomad_fairdi`, but you might have changed this to `nomad_v0_8` as recommended by
+our old Oasis documentation.
 
 ```sh
-docker exec -ti nomad_oasis_app bash -c 'nomad admin migrate --mongo-db nomad_fairdi'
+docker exec -ti nomad_oasis_app bash -c 'nomad admin upgrade migrate-mongo --src-db-name nomad_v0_8'
+docker exec -ti nomad_oasis_app bash -c 'nomad admin uploads reprocess'
 ```
 
 Now all your data should appear in your OASIS again. If you like, you can remove the
@@ -539,11 +554,12 @@ In addition, each worker process might spawn additional threads and consume
 more than one CPU core.
 
 There are multiple ways to restrict the resources that the worker might consume:
+
 - limit the number of worker processes and thereby lower the number of used cores
 - disable or limit multi-threading
 - limit available CPU utilization of the worker's docker container with docker
 
-### Limit the number of worker processes
+#### Limit the number of worker processes
 
 The worker uses the Python package celery. Celery can be configured to use less than the
 default number of worker processes (which equals the number of available cores). To use just
@@ -556,7 +572,7 @@ command: python -m celery worker -l info -A nomad.processing --concurrency=1 -Q
 
 See also the [celery documentation](https://docs.celeryproject.org/en/stable/userguide/workers.html#id1).
 
-### Limiting the use of threads
+#### Limiting the use of threads
 
 You can also reduce the usable threads that Python packages based on OpenMP might use to
 reduce the threads that might be spawned by a single worker process. Simply set the `OMP_NUM_THREADS`
@@ -571,7 +587,7 @@ services:
             OMP_NUM_THREADS: 1
 ```
 
-### Limit CPU with docker
+#### Limit CPU with docker
 
 You can add a `deploy.resources.limits` section to the worker service in the `docker-compose.yml`:
 
@@ -591,12 +607,12 @@ See also the [docker-compose documentation](https://docs.docker.com/compose/comp
 
 ## NOMAD Oasis FAQ
 
-### Why use an Oasis?
+#### Why use an Oasis?
 There are three reasons: You want to manage data in private, you want local availability of public NOMAD data without network restrictions, or you want to manage large amounts of low quality and preliminary data that is not intended for publication.
 
-### How to organize data in NOMAD Oasis?
+#### How to organize data in NOMAD Oasis?
 
-#### How can I categorize or label my data?
+##### How can I categorize or label my data?
 Currently, NOMAD supports the following mechanisms to organize data:
 
 - data always belongs to one upload, which has a main author, and is assigned various metadata, like the upload create time, a custom human-friendly upload name, etc.
 - data can be assigned to multiple independent datasets
@@ -604,39 +620,39 @@ data can hold a proprietary id called “external_id”
 - data can be assigned multiple coauthors in addition to the main author
 
 Data can be filtered and searched in various ways.
 
-####  Is there some rights-based visibility?
+#####  Is there some rights-based visibility?
 An upload can be viewed at any time by the main author, the upload coauthors, and the
 upload reviewers. Other users will not be able to see the upload until it is published and
 the embargo (if requested) has been lifted. The main author decides which users to register as coauthors and reviewers, when to publish and if it should be published with an embargo.
 The main author and upload coauthors are the only users who can edit the upload while it is in staging.
 
-### How to share data with the central NOMAD?
+#### How to share data with the central NOMAD?
 
 Keep in mind that it is not yet entirely clear how we will do this.
 
-#### How to designate Oasis data for publishing to NOMAD?
+##### How to designate Oasis data for publishing to NOMAD?
 Currently, you should use one of the organizational mechanisms to designate data for publication to NOMAD. E.g., you can use a dedicated dataset for publishable data.
 
-#### How to upload?
+##### How to upload?
 
 We will probably provide functionality in the API of the central NOMAD to upload data from an Oasis to the central NOMAD. We will provide the necessary scripts and detailed instructions. Most likely, the data that is uploaded to the central NOMAD can be selected via a search query. Therefore, using a dedicated dataset would be an easy selection criterion.
 
-### How to maintain an Oasis installation?
+#### How to maintain an Oasis installation?
 
-#### How to install a NOMAD Oasis?
+##### How to install a NOMAD Oasis?
 Follow our guide: https://nomad-lab.eu/prod/rae/docs/ops.html#operating-a-nomad-oasis
 
-#### How do version numbers work?
+##### How do version numbers work?
 There are still a lot of things in NOMAD that are subject to change. Currently, changes in the minor version number (0.x.0) designate major changes that require data migration. Changes in the patch version number (0.7.x) just contain minor changes and fixes and do not require data migration. Once we reach 1.0.0, NOMAD will use the regular semantic versioning conventions.
 
-#### How to upgrade a NOMAD Oasis?
+##### How to upgrade a NOMAD Oasis?
 When we release a new version of the NOMAD software, it will be available as a new Docker image with an increased version number. You simply change the version number in your docker-compose.yaml and restart.
 
-#### What about major releases?
+##### What about major releases?
 Going from NOMAD 0.7.x to 0.8.x will require data migration. This means the layout of the data has changed and the new version cannot be used on top of the old data. This requires a separate installation of the new version and mirroring the data from the old version via NOMAD’s API. Detailed instructions will be made available with the new version.
 
-#### How to move data between installations?
+##### How to move data between installations?
 With the release of 0.8.x, we will clarify how to move data between installations (see the last question).
 
-#### How to backup my Oasis?
+##### How to backup my Oasis?
 To back up your Oasis, at least the file data and mongodb data need to be backed up. You determined the path to your file data (your uploads) during the installation. This directory can be backed up like any other file backup (e.g. with rsync). To back up the mongodb, please refer to the official mongodb documentation: https://docs.mongodb.com/manual/core/backups/. We suggest a simple mongodump export that is backed up alongside your files. The elasticsearch contents can be reproduced from the information in files and mongodb.
diff --git a/docs/pythonlib.md b/docs/pythonlib.md
index 9ce5ec085ad7d9b84820450fce4b933d20fa3221..dffb5c53c9e0bdaeac612e00b67345dbc29ec04f 100644
--- a/docs/pythonlib.md
+++ b/docs/pythonlib.md
@@ -11,8 +11,9 @@ To install the latest stable pypi release, simply use pip:
 pip install nomad-lab
 ```
 
-Since NOMAD v1 is still in beta, this will still give you the Python library for
-the NOMAD v0.10.x version.
+!!! attention
+    Since NOMAD v1 is still in beta, there is no official pypi release. `pip install` will
+    still give you the Python library for the NOMAD v0.10.x version.
 
 To install the latest developer release (e.g. v1) from our servers use:
 ```sh
@@ -46,7 +47,7 @@ The various extras have the following meaning:
 
 The `ArchiveQuery` allows you to search for entries and access their parsed *archive* data
 at the same time. Furthermore, all data is accessible through a convenient Python interface
-based on the [NOMAD metainfo](metainfo.html) rather than plain JSON.
+based on the [NOMAD metainfo](metainfo.md) rather than plain JSON.
 
 Here is an example:
 ```py
@@ -134,13 +135,13 @@ O8Ca2Ti4: -116.52240913000001 electron_volt
 
 Let's discuss the used `ArchiveQuery` parameters:
 
-- `query`, this is an arbitrary API query as discussed in the under [Queries in the API section](api.html#queries).
+- `query`, this is an arbitrary API query as discussed under [Queries in the API section](api.md#queries).
 - `required`, this optional parameter allows you to specify what parts of an archive you need. This is also
-described in under [Access archives in API section](api.html#access-archives).
+described under [Access archives in the API section](api.md#access-archives).
 - `per_page`, with this optional parameter you can determine how many results are downloaded at a time. For bulk downloading many results, we recommend ~100. If you are just interested in the first results, a lower number might increase performance.
 - `max`, with this optional parameter, we limit the maximum amount of entries that are downloaded, just to avoid accidentally iterating through a result set of unknown and potentially large size.
 - `owner` and `auth`, these allow you to access private data or to specify that you only want to
-query your data. See also [owner](api.html#owner) and [auth](api.html#authentication) in the API section. Her is an example with authentication:
+query your data. See also [owner](api.md#owner) and [auth](api.md#authentication) in the API section. Here is an example with authentication:
 ```py
 from nomad.client import ArchiveQuery, Auth
 
diff --git a/docs/web.md b/docs/web.md
index 5caa2473c9c661a349c5fa4c94555cb2a592c372..f9419b6ac5695ae54b5f23851c72407b9e3bc5b1 100644
--- a/docs/web.md
+++ b/docs/web.md
@@ -16,7 +16,7 @@ This will come in February 2022.
 ### Preparing files
 
 You can upload files one by one, but you can also provide larger `.zip` or `.tar.gz`
-archive files, if this is easier to you. Also the file upload form the command line with
+archive files, if this is easier for you. Also, the file upload from the command line with
 curl or wget is based on archive files. The specific layout of these files is up to you.
 NOMAD will simply extract them and consider the whole directory structure within.
 
diff --git a/examples/metainfo/metainfo.ipynb b/examples/metainfo/metainfo.ipynb
deleted file mode 100644
index 641ae731137480232ce126ec9009693cc82d979e..0000000000000000000000000000000000000000
--- a/examples/metainfo/metainfo.ipynb
+++ /dev/null
@@ -1,313 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# NOMAD Metainfo 2.0 demonstration\n",
-    "\n",
-    "You can find more complete documentation [here](https://labdev-nomad.esc.rzg.mpg.de/fairdi/nomad/testing/docs/metainfo.html)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 2,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from nomad.metainfo import MSection, SubSection, Quantity, Datetime, units\n",
-    "import numpy as np\n",
-    "import datetime"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Sections and quantities\n",
-    "\n",
-    "To define sections and their quantities, we use Python classes and attributes. Quantities have *type*, *shape*, and *unit*."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 3,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "class System(MSection):\n",
-    "    \"\"\" The simulated system \"\"\"\n",
-    "    number_of_atoms = Quantity(type=int, derived=lambda system: len(system.atom_labels))\n",
-    "    atom_labels = Quantity(type=str, shape=['number_of_atoms'])\n",
-    "    atom_positions = Quantity(type=np.dtype(np.float64), shape=['number_of_atoms', 3], unit=units.m)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Such *section classes* can then be instantiated like regular Python classes. Respectively, *section instances* are just regular Python object and section quantities can be get and set like regular Python object attributes."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 4,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "system = System()\n",
-    "system.atom_labels = ['H', 'H', '0']\n",
-    "system.atom_positions = np.array([[6, 0, 0], [0, 0, 0], [3, 2, 0]]) * units.angstrom"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Of course the metainfo is not just about dealing with physics data in Python. Its also about storing and managing data in various fileformats and databases. Therefore, the created data can be serialized, e.g. to JSON. All *section \n",
-    "instances* have a set of additional `m_`-methods that provide addtional functions. Note the unit conversion."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 5,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "'{\"atom_labels\": [\"H\", \"H\", \"0\"], \"atom_positions\": [[6e-10, 0.0, 0.0], [0.0, 0.0, 0.0], [3e-10, 2e-10, 0.0]]}'"
-      ]
-     },
-     "execution_count": 5,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "system.m_to_json()"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Sub-sections to form hiearchies of data\n",
-    "\n",
-    "*Section instances* can be nested to form data hierarchies. To achive this, we first have to create *section \n",
-    "definitions* that have sub-sections."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 6,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "class Run(MSection):\n",
-    "    timestamp = Quantity(type=Datetime, description='The time that this run was conducted.')\n",
-    "    systems = SubSection(sub_section=System, repeats=True)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now we can add *section instances* for `System` to *instances* of `Run`."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 7,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "'{\"timestamp\": \"2019-10-09T14:48:43.663363\", \"systems\": [{\"atom_labels\": [\"H\", \"H\", \"0\"], \"atom_positions\": [[6e-10, 0.0, 0.0], [0.0, 0.0, 0.0], [3e-10, 2e-10, 0.0]]}, {\"atom_labels\": [\"H\", \"H\", \"0\"], \"atom_positions\": [[5e-10, 0.0, 0.0], [0.0, 0.0, 0.0], [2.5e-10, 2e-10, 0.0]]}]}'"
-      ]
-     },
-     "execution_count": 7,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "run = Run()\n",
-    "run.timestamp = datetime.datetime.now()\n",
-    "\n",
-    "system = run.m_create(System)\n",
-    "system.atom_labels = ['H', 'H', '0']\n",
-    "system.atom_positions = np.array([[6, 0, 0], [0, 0, 0], [3, 2, 0]]) * units.angstrom\n",
-    "\n",
-    "system = run.m_create(System)\n",
-    "system.atom_labels = ['H', 'H', '0']\n",
-    "system.atom_positions = np.array([[5, 0, 0], [0, 0, 0], [2.5, 2, 0]]) * units.angstrom\n",
-    "run.m_to_json()"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "The whole data hiearchy can be navigated with regular Python object/attribute style programming and values can be\n",
-    "used for calculations as usual."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 8,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/html": [
-       "[[-1.   0.   0. ] [ 0.   0.   0. ] [-0.5  0.   0. ]] angstrom"
-      ],
-      "text/latex": [
-       "$[[-1.   0.   0. ] [ 0.   0.   0. ] [-0.5  0.   0. ]] angstrom$"
-      ],
-      "text/plain": [
-       "<Quantity([[-1.   0.   0. ]\n",
-       " [ 0.   0.   0. ]\n",
-       " [-0.5  0.   0. ]], 'angstrom')>"
-      ]
-     },
-     "execution_count": 8,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "(run.systems[1].atom_positions - run.systems[0].atom_positions).to(units.angstrom)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Reflection, inspection, and code-completion\n",
-    "\n",
-    "Since all definitions are available as *section classes*, Python already knows about all possible quantities. We can \n",
-    "use this in Python notebooks, via *tab* or the `?`-operator. Furthermore, you can access the *section definition* of all *section instances* with `m_def`. Since a *section defintion* itself is just a piece of metainfo data, you can use it to programatically explore the definition itselve."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 9,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "[number_of_atoms:Quantity, atom_labels:Quantity, atom_positions:Quantity]"
-      ]
-     },
-     "execution_count": 9,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "run.systems[0].m_def.quantities"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 10,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "'The time that this run was conducted.'"
-      ]
-     },
-     "execution_count": 10,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "run.m_def.all_quantities['timestamp'].description"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 11,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "['number_of_atoms']"
-      ]
-     },
-     "execution_count": 11,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "System.atom_labels.shape"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 32,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "t = np.dtype(np.i64)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 33,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "numpy.int64"
-      ]
-     },
-     "execution_count": 33,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "t.type"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": ".venv",
-   "language": "python",
-   "name": ".venv"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.3"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/mkdocs.yml b/mkdocs.yml
index b2fd8274e6842a905b0547b6044bcb86309b03bd..2ab645e218959fbf01b690e7e7dd49e2c4665345 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -7,6 +7,7 @@ nav:
   - web.md
   - api.md
   - pythonlib.md
+  - archive.md
   # - Using the AI Toolkit and other remote tools: aitoolkit.md
   - Extending and Developing NOMAD:
     - developers.md
@@ -25,6 +26,10 @@ theme:
   favicon: assets/favicon-hres.png
 # repo_url: https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/
 markdown_extensions:
+  - attr_list
+  - md_in_html
+  - admonition
+  - pymdownx.details
   - pymdownx.highlight:
       anchor_linenums: true
   - pymdownx.inlinehilite
diff --git a/nomad/config.py b/nomad/config.py
index 1480f1a6c965b825394642b4b676c0afb5984125..b62666af6bd5dc5d444d498c423bbb803af2021d 100644
--- a/nomad/config.py
+++ b/nomad/config.py
@@ -128,7 +128,7 @@ fs = NomadConfig(
     public='.volumes/fs/public',
     local_tmp='/tmp',
     prefix_size=2,
-    archive_version_suffix=None,
+    archive_version_suffix='v1',
     working_directory=os.getcwd()
 )
 
@@ -155,7 +155,7 @@ keycloak = NomadConfig(
 mongo = NomadConfig(
     host='localhost',
     port=27017,
-    db_name='nomad_fairdi'
+    db_name='nomad_v1'
 )
 
 logstash = NomadConfig(