diff --git a/cmlkit.ipynb b/cmlkit.ipynb
index 207259da9915fab8ef50ca41cfdbc2433cb3df75..442dcf41e9938199882ffc58a16ce157fc9b8b55 100644
--- a/cmlkit.ipynb
+++ b/cmlkit.ipynb
@@ -20,1185 +20,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "**Related publication: [Langer, Goeßmann, Rupp (2020)](http://marcel.science/repbench/)**\n",
-    "\n",
-    "***\n",
-    "\n",
-    "Hello! 👋 Welcome to the [`cmlkit` 🐫🧰](https://marcel.science/cmlkit) tutorial. This tutorial will introduce you to the `cmlkit` python package, from its conceptual foundations and architecture, to its hyper-parameter tuning module, and finally concluding with an application: We will develop a machine learning model to [predict formation energies of candidate materials for transparent conducting oxides](https://www.kaggle.com/c/nomad2018-predict-transparent-conductors) (Paper: [Sutton *et al.* (2019)](https://doi.org/10.1038/s41524-019-0239-3)). After completing this tutorial, you should be able to use `cmlkit` as a basis for your own experiments, and will have a solid understanding of stochastic, parallel hyper-parameter optimisation as implemented in `cmlkit.tune`.\n",
-    "\n",
-    "### Prerequisites\n",
-    "\n",
-    "You will get the most out of this tutorial if you:\n",
-    "\n",
-    "- Are familiar with Python 3,\n",
-    "- Know a little bit about chemistry and/or physics,\n",
-    "- Know roughly how kernel ridge regression works, and know a bit about machine learning in general.\n",
-    "\n",
-    "For an introduction to kernel ridge regression, you can take a look at [this tutorial](https://analytics-toolkit.nomad-coe.eu/public/user-redirect/notebooks/tutorials/krr4mat.ipynb), which contains everything you need to know!\n",
-    "\n",
-    "The contents of this tutorial will mostly be of interest for people researching the application of machine learning models to computational chemistry and computational condensed matter physics, and in particularly those interested in building computational experiments and toolkits in that domain.\n",
-    "\n",
-    "I'll use the following abbreviations and terms throughout:\n",
-    "\n",
-    "- \"Representation\": A way to transform the coordinates and atomic numbers (i.e. the \"structure\") of a molecule or material into a vector.\n",
-    "- \"ML\": Machine learning, here, we can basically substitute it with \"fitting\" or \"interpolation\".\n",
-    "- \"KRR\": Kernel ridge regression, a regression (i.e. \"fitting\") method.\n",
-    "- \"HP\": Hyper-parameters. These are the \"free\" parameters of a ML model, which aren't directly determined by training.\n",
-    "- \"SOAP\": Smooth Overlap of Atomic Positions representation ([Bartók, Kondor, Csányi (2013)](https://doi.org/10.1103/PhysRevB.87.184115)).\n",
-    "- \"MBTR\": Many-Body Tensor Representation [(Huo, Rupp (2017))](https://arxiv.org/abs/1704.06439).\n",
-    "- \"System\" or \"structure\": Either a molecule, other finite system, or a periodic system.\n",
-    "\n",
-    "🏁 Let's get started. 🏁\n",
-    "\n",
-    "## What is `cmlkit`?\n",
-    "\n",
-    "According to the [Github repository](https://github.com/sirmarcel/cmlkit/):\n",
-    "\n",
-    "> `cmlkit` is an extensible `python` package providing clean and concise infrastructure to specify, tune, and evaluate machine learning models for computational chemistry and condensed matter physics. Intended as a common foundation for more specialised systems, not a monolithic user-facing tool, it wants to help you build your own tools! ✨\n",
-    "\n",
-    "In this tutorial, I will give an overview of the things `cmlkit` can do, and hopefully make a case for why it is useful to you as a base to build your own projects on. (You can find a more compact feature list on the Github page, if that's more your kind of thing. This is the scenic route. 🌆)\n",
-    "\n",
-    "At this point, it's most useful if we go on a little `cmlkit` tour. Let's saddle up! 🐫\n",
-    "\n",
-    "## A Tour of `cmlkit`\n",
-    "\n",
-    "Conceptually, the \"core\" of `cmlkit` consists of:\n",
-    "\n",
-    "1. `Component` class and sub-classes that define how parts of ML models should be implemented, an\n",
-    "2. Engine that defines a way to specify parametrisations of these components in a special format (\"configs\"), and a\n",
-    "3. Plugin system to easily extend this architecture with custom components.\n",
-    "\n",
-    "On top of this, `cmlkit` then provides:\n",
-    "\n",
-    "1. Reference interfaces to commonly used representations of molecules and materials,\n",
-    "2. Reference interface to `qmmlpack` for fast kernel ridge regression, and finally\n",
-    "3. Tools to combine these into ML models.\n",
-    "\n",
-    "Additionally, `cmlkit` also contains a hyper-parameter tuning engine based on `hyperopt`.\n",
-    "\n",
-    "\n",
-    "\n",
-    "### Core 🌱\n",
-    "\n",
-    "#### Components\n",
-    "\n",
-    "In `cmlkit`, a `Component` is an object that:\n",
-    "\n",
-    "- Needs to be instantiated with a lot of complex parameters,\n",
-    "- Has little to no internal state, (i.e. once you create it, it can't change)\n",
-    "- Does one thing, deterministically (i.e. given the same input, it always produces the same output).\n",
-    "\n",
-    "In most cases, we can simply think of a `Component` as a function with a lot of parameters. Something along the lines of:\n",
-    "\n",
-    "```python\n",
-    "f(data, lots, of, other, parameters, ...)\n",
-    "```\n",
-    "\n",
-    "Instead of always passing around these parameters, we create the component `c` with these parameters, and then simply call `c(data)` instead of an extremely long, unwieldy expression. (For the technically-minded: This is basically a way of creating [partials](https://docs.python.org/3.7/library/functools.html). Components are mostly \"fancy functions\".)\n",
-    "\n",
-    "This is useful because the parts of a ML model can often be described in this way. For example, a *representation* is simply transformation of a set of coordinates into a vector, with a lot of parameters. Or a *kernel* in KRR is simply a function acting on representations. If we write down all the parameters for all the components of a model we can reconstruct it easily. And if there is no state, we can reconstruct it *exactly*!\n",
-    "\n",
-    "#### Configs\n",
-    "\n",
-    "`Components` can always be described as specially formatted dictionaries, called *configs*. Having a canonical, *non-code*, format forces the parametrisation to be separated from the code implementing the models. It also means that everything can be easily serialised, and remains human readable. This is surprisingly useful, as we'll see later.\n",
-    "\n",
-    "The config format is:\n",
-    "\n",
-    "```\n",
-    "{\"kind\": { ... }}\n",
-    "```\n",
-    "\n",
-    "Where `kind` is a unique name for the type of component being instantiated (essentially, the class name) and the \"inner\" dictionary contains all arguments that would normally be passed into the `__init__` call.\n",
-    "\n",
-    "This format is designed to print nicely to [`yaml`](https://en.wikipedia.org/wiki/YAML), a simple data-serialization language. Here is, for instance, the entire `config` of a SOAP+KRR model:\n",
-    "\n",
-    "```yaml\n",
-    "model:\n",
-    "  per: cell\n",
-    "  regression:\n",
-    "    krr:\n",
-    "      kernel:\n",
-    "        kernel_atomic:\n",
-    "          kernelf:\n",
-    "            gaussian:\n",
-    "              ls: 90.50966799187809\n",
-    "      nl: 9.5367431640625e-07\n",
-    "  representation:\n",
-    "    ds_soap:\n",
-    "      cutoff: 3\n",
-    "      elems: [8, 13, 31, 49]\n",
-    "      l_max: 8\n",
-    "      n_max: 2\n",
-    "      rbf: gto\n",
-    "      sigma: 0.5\n",
-    "```\n",
-    "\n",
-    "Calling `cmlkit.from_yaml()` on the this produces an instance of `Model` with KRR as regression method and the SOAP representation (implemented in `dscribe`), with these parameters. We'll return to this later.\n",
-    "\n",
-    "#### Plugins\n",
-    "\n",
-    "This deserialisation scheme lends itself to a simple plugin system: Since at some point, `cmlkit` has to look up the classes associated with the `kind` string in the config dict, we can simply add external classes to this lookup table.\n",
-    "\n",
-    "In practice, `cmlkit` plugins are python packages that, at module level, have a `components` attribute which is a list of classes (sub-classes of `Component`) to be included in the registry. `cmlkit` knows about these through an environment variable, `CML_PLUGINS`. It's rudimentary -- but it works!\n",
-    "\n",
-    "#### Caching\n",
-    "\n",
-    "The `Component` concept also enables easy caching: Since `Components` produce outputs that are deterministic, we can store cached results with: a) the hash of the input, and b) the hash of the `Component`'s config. \n",
-    "\n",
-    "In practice, to avoid computing costly hashes of large amounts of data, we go one step further: A `Dataset` is only hashed at the very beginning, and then we simply store hashes for all `Components` applied to it, sequentially. Since this is entirely deterministic, this \"history\" can serve as hash for the input at any point in the pipeline!\n",
-    "\n",
-    "At the moment, caching support is quite experimental, but can be enables by passing `{\"cache\": \"disk\"}` into the context of a `Component`. Please keep in mind that this can quickly lead to large amounts of data being stored to disk, so proceed carefully! You'll see the caching system in action later.\n",
-    "\n",
-    "#### Data\n",
-    "\n",
-    "The \"history\" tracking for data as it is transformed by `Components` is implemented in a `cmlkit`-internal `Data` parent class, which complements the `Component` concept: `Components` are (mostly) pure functions acting on `Data` objects. Check out the `data` module in `cmlkit` for details!\n",
-    "\n",
-    "### Parts 🌳\n",
-    "\n",
-    "`cmlkit` implements `Components` for ML models that follow the representation + regressor pattern. Currently supported representations and regression methods are:\n",
-    "\n",
-    "#### Representations\n",
-    "- Many-Body Tensor Representation (MBTR) by [Huo, Rupp (2017)](https://arxiv.org/abs/1704.06439) (`qmmlpack` and `dscribe` implementation)\n",
-    "- Smooth Overlap of Atomic Positions (SOAP) representaton by [Bartók, Kondor, Csányi (2013)](https://doi.org/10.1103/PhysRevB.87.184115) (`quippy` and `dscribe` implementations)\n",
-    "- Symmetry Functions (SF) representation by [Behler (2011)](https://doi.org/10.1063/1.3553717) (`RuNNer` and `dscribe` implementation), with a semi-automatic parametrisation scheme taken from [Gastegger *et al.* (2018)](https://doi.org/10.1063/1.5019667)\n",
-    "\n",
-    "#### Regression methods\n",
-    "- Kernel Ridge Regression (KRR) as implemented in [`qmmlpack`](https://gitlab.com/qmml/qmmlpack) (supporting both global and local/atomic representations)\n",
-    "\n",
-    "\n",
-    "***\n",
-    "\n",
-    "And this brings us to the end of the tour of the `cmlkit` core. Let's get a bit more *practical*!\n",
-    "\n",
-    "\n"
+    "# We are currently working to restore this notebook as soon as possible, we apologize for the inconvenience."
    ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Loading Data and Model Basics\n",
-    "\n",
-    "In this chapter, we'll take a look at how to load datasets, how to make subsets, and how to define, train and evaluate models. We'll do all of this on a small toy subset of the `nmd18u` dataset, which was created for the [\"Nomad2018 Predicting Transparent Conductors\" Kaggle challenge](https://www.kaggle.com/c/nomad2018-predict-transparent-conductors) (Paper: [Sutton *et al.* (2019)](https://doi.org/10.1038/s41524-019-0239-3)). It consists of 3000 Al<sub>x</sub>Ga<sub>y</sub>In<sub>z</sub> oxides, and we'll predict their formation energy. Having learned these basics, we'll then move on to hyper-parameter optimisation for the same problem!\n",
-    "\n",
-    "\n",
-    "Let's start by finally importing `cmlkit`!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:14:51.895588Z",
-     "start_time": "2020-03-04T23:14:50.784890Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "import cmlkit\n",
-    "\n",
-    "cmlkit.caches.location = \"data/cmlkit/cache\" # usually, this would be set using an environment variable"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Nice!\n",
-    "\n",
-    "### Datasets\n",
-    "\n",
-    "Let's now load some training data. `cmlkit` allows you to set a path (using the `CML_DATASET_PATH` environment variable) where it'll automatically look for named datasets, so you can load a dataset by simply giving its name. A `Dataset` is simply a container that collects the geometries of many systems (molecules or crystals) and the properties we're trying to interpolate in one place. (If you're interested in creating your own `Datasets`, please have a look at the appendix of this tutorial!)\n",
-    "\n",
-    "We'll now load the training set for the Kaggle challenge, and look at what `cmlkit` can tell us about the dataset."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:14:53.762582Z",
-     "start_time": "2020-03-04T23:14:51.899910Z"
-    },
-    "scrolled": true
-   },
-   "outputs": [],
-   "source": [
-    "nmd18_train = cmlkit.load_dataset(\"data/cmlkit/nmd18_train\")\n",
-    "print(nmd18_train.report)\n",
-    "# \"fe\" is short for formation energy, \"bg\" for bandgap, \"sg\" for spacegroup number"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Nice. We'll now generate our toy datasets for this chapter. `cmlkit.utility.threeway_split` is a convenience function to randomly draw two subsets, and also return the rest. (`_` is a python idiom that just means \"we don't care about this variable\".)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:14:53.980365Z",
-     "start_time": "2020-03-04T23:14:53.782941Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "_, idx_toy_train, idx_toy_test = cmlkit.utility.threeway_split(nmd18_train.n, 80, 20)\n",
-    "toy_train = cmlkit.Subset.from_dataset(nmd18_train, idx=idx_toy_train)\n",
-    "toy_test = cmlkit.Subset.from_dataset(nmd18_train, idx=idx_toy_test)\n",
-    "print(f\"N_train = {toy_train.n}\")\n",
-    "print(f\"N_test = {toy_test.n}\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now that we have some data, let's see how we can get a model trained!\n",
-    "\n",
-    "### Loading and Training a Model\n",
-    "\n",
-    "Here is a model config with all hyper-parameters already tuned. It consists of the SOAP representation, which transforms atomic positions in the neighbourhood of a central atom into a vector, which is used as *feature* for a kernel ridge regression model. The main component of the KRR model is the *kernel*, which computes a \"distance\" between two structures. In this case, we use an *atomic* kernel, which first compares all atoms in two structures, and then sums up the result to obtain a system-system kernel matrix.\n",
-    "\n",
-    "(This particular model comes from the [repbench](https://marcel.science/repbench) paper, its HPs were tuned for the `nmd18u` dataset at `Ntrain=1600`.)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:14:54.008314Z",
-     "start_time": "2020-03-04T23:14:53.986770Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "config = {\n",
-    "    \"model\": {\n",
-    "        \"per\": \"cell\",  # this means that the model will internally be trained with energies per unit cell,\n",
-    "                        # this is needed because SOAP is a local representation, so the KRR model is extensive\n",
-    "        \"regression\": {\n",
-    "            \"krr\": {\n",
-    "                \"centering\": False,  # we do not subtract the mean from labels or kernel matrices\n",
-    "                \"kernel\": {\n",
-    "                    \"kernel_atomic\": {\n",
-    "                        \"kernelf\": {\"gaussian\": {\"ls\": 22.627416997969522}},  # gaussian kernel\n",
-    "                        \"norm\": False,  # we don't normalise the kernel by atom counts \n",
-    "                                        # (needed to predict intensive properties)\n",
-    "                    }\n",
-    "                },\n",
-    "                \"nl\": 9.5367431640625e-07,  # regularisation parameter for KRR\n",
-    "            }\n",
-    "        },\n",
-    "        \"representation\": {\n",
-    "            \"ds_soap\": {  # we use the SOAP implemented in dscribe\n",
-    "                \"cutoff\": 3,  # 3 angstrom cutoff radius\n",
-    "                \"elems\": [8, 13, 31, 49],  # elements in the dataset\n",
-    "                \"l_max\": 5,  # number of angular basis functions\n",
-    "                \"n_max\": 3,  # number of radial basis functions\n",
-    "                \"rbf\": \"gto\",# radial basis: Gaussians\n",
-    "                \"sigma\": 0.3535533905932738, # broadening\n",
-    "            }\n",
-    "        },\n",
-    "    }\n",
-    "}\n",
-    "\n",
-    "model = cmlkit.from_config(config)\n",
-    "print(model)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This is the `config` mechanism at work! We've turned a dictionary into a whole hierarchy of objects, without having to import anything or write any additonal code. We're even using a class from outside `cmlkit`!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:14:54.028714Z",
-     "start_time": "2020-03-04T23:14:54.015701Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "print(model.representation)  # <- not implemented in cmlkit, but in the cscribe plugin!\n",
-    "print(model.regression)\n",
-    "print(model.regression.kernel)\n",
-    "print(model.regression.kernel.kernelf)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Observe that the `model.representation` comes from a module outside of `cmlkit` itself, it's from the [`cscribe`](https://github.com/sirmarcel/cscribe) plugin, which provides an interface to [`dscribe`](https://github.com/SINGROUP/dscribe/), which in turn provides alternative implementations for common representations.\n",
-    "\n",
-    "So! To see how it works, we'll now train the model and predict some energies on our toy dataset.\n",
-    "\n",
-    "We train the model on the formation energy (\"`fe`\"):"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:08.992318Z",
-     "start_time": "2020-03-04T23:14:54.034916Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "model.train(toy_train, target=\"fe\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Well, that was not too difficult! Let's predict something."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:11.394550Z",
-     "start_time": "2020-03-04T23:15:08.995748Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "pred = model.predict(toy_test, per=\"cation\")\n",
-    "print(pred)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Okay... what is the ground truth?"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:11.413811Z",
-     "start_time": "2020-03-04T23:15:11.396955Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "true = toy_test.pp(\"fe\", per=\"cation\")  # pp means \"property per\"\n",
-    "print(true)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Whelp! We did ... not too well? But, given that we've trained on only 80 points, it's not too bad. \n",
-    "\n",
-    "Let's put this into qualitative terms and compute some popular loss functions:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:11.435051Z",
-     "start_time": "2020-03-04T23:15:11.425015Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "loss = cmlkit.evaluation.get_loss(\"rmse\", \"mae\", \"maxae\", \"r2\", \"rmsle\")\n",
-    "print(loss(true, pred))"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "(RMSE=root mean squared error, MAE=mean absolute error, MAXAE=maximum absolute error, R2=[Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) squared, RMSLE=root mean squared log error, the error used in the Sutten *et al.* (2019) paper. RMSE is an upper bound for MAE, and more sensitive to outliers.)\n",
-    "\n",
-    "For context, let's check the standard deviation of the `true` energy values:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(true.std())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Very roughly speaking, we should aim for ~10% or less of the standard deviation for our MAE, or even better, the RMSE.  There is still room for improvement!\n",
-    "\n",
-    "***\n",
-    "\n",
-    "In essence, this concludes the \"core\" tutorial -- you now know the basics of how `cmlkit` is supposed to work, you can load models, and you know how to train and predict. At this point, you're in an excellent position to take a look at the [repository](https://github.com/sirmarcel/cmlkit) and take it from there!\n",
-    "\n",
-    "(But of course, then you'd be missing...)\n",
-    "\n",
-    "##  Hyper-parameter Optimisation\n",
-    "\n",
-    "We'll conclude the tutorial with the topic of hyper-parameter optimisation. Most state-of-the-art ML methods, representations and regression methods alike, have a number of free parameters that have a large impact on their performance, and no immediate way of setting these parameters. One approach, which we're exploring here, is to do it empirically: We pick some loss function (i.e. a quantitative measure of model quality) and minimise it using a global optimisation method. In this part of the tutorial, you'll first learn how such an optimiser is implemented in `cmlkit`, and finally, we'll explore the results of running such a search for `nmd18u`.\n",
-    "\n",
-    "### Exploring the Optimiser\n",
-    "\n",
-    "At its core, the `cmlkit.tune` module is an asynchronous, stochastic optimiser based on [`hyperopt`](https://github.com/hyperopt/hyperopt). It essentially allows you to specify a probability distribution over possible parameter choices, draws from that distribution, finds out which combination works best, and then updates the distribution to focus further on promising areas (if the \"Tree-Structured Parzen Estimators\" method is used, otherwise it's a random search). This type of method is most appropriate to high-dimensional search spaces, and can also deal with \"tree structured\" problems, where some paramaters are only needed based on a prior choice. (For a more formal introduction, please see [Bergstra *et al.* (2011)](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization).)\n",
-    "\n",
-    "Let's first explore the mechanics of the `cmlkit.tune` module. We'll start by building a toy problem:\n",
-    "\n",
-    "```\n",
-    "Let f(x, y, z) = 2*(x*y - 2.0)**2 + (z - 1)**2 ;\n",
-    "Find x, y, z in [0, 2] to minimise f.\n",
-    "```\n",
-    "\n",
-    "In this case, `f` is our loss function, and `x, y, z in [0, 2]` is our parameter search space.\n",
-    "\n",
-    "The search space is simply a dictionary with special \"tokens\" that tell the system to use a \"uniform\" prior for \"variable name\" from \"start value\" to \"end value\":"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:11.446861Z",
-     "start_time": "2020-03-04T23:15:11.441015Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "space = {\"x\": [\"hp_uniform\", \"var_x\", 0, 2],\n",
-    "         \"y\": [\"hp_uniform\", \"var_y\", 0, 2],\n",
-    "         \"z\": [\"hp_uniform\", \"var_z\", 0, 2],}"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Therefore, this `space` means that we would like to uniformly sample x, y and z from zero to two. (If you happen to be familiar with `hyperopt`, we are literally just calling the `hp.` functions here!)\n",
-    "\n",
-    "We implement `f`, our loss function, as a `Component` that follows the `Evaluator` interface:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:11.465445Z",
-     "start_time": "2020-03-04T23:15:11.451766Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "class F(cmlkit.engine.Component):\n",
-    "    kind = \"f\"\n",
-    "\n",
-    "    def __call__(self, model):\n",
-    "        x, y, z = model[\"x\"], model[\"y\"], model[\"z\"]\n",
-    "        loss = 2*(x*y - 2.0)**2 + (z - 1)**2\n",
-    "        return {\"loss\": loss}\n",
-    "\n",
-    "    def _get_config(self):\n",
-    "        return {}\n",
-    "\n",
-    "cmlkit.register(F)  # registering the Component\n",
-    "\n",
-    "evaluator = cmlkit.from_config({\"f\": {}})\n",
-    "print(evaluator)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now we're basically all set to get going. Let's instantiate a `Search`, which will take care of the optimisation part of our task. Without any additional information, the search will simply return random samples from our search space:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:12.023830Z",
-     "start_time": "2020-03-04T23:15:11.468282Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "search = cmlkit.tune.Hyperopt(space, method=\"tpe\")\n",
-    "\n",
-    "suggestions = [search.suggest() for i in range(50)]\n",
-    "for i in range(5):\n",
-    "    print(suggestions[i])"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "(The first part of each of these is simply a unique identifier for this particular \"suggestion\", which we can later use to inform the search of the loss we have obtained for this suggestion.)\n",
-    "\n",
-    "Let's have a quick look at what the search has actually suggested by making a histogram of each of the variable suggestions:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:12.903563Z",
-     "start_time": "2020-03-04T23:15:12.029802Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "import matplotlib.pyplot as plt\n",
-    "import seaborn\n",
-    "%matplotlib inline\n",
-    "seaborn.set_context('notebook')\n",
-    "\n",
-    "fig, axs = plt.subplots(nrows=1, ncols=3)\n",
-    "fig.tight_layout(pad=1.0)\n",
-    "seaborn.distplot([s[1][\"x\"] for s in suggestions], ax=axs[0], kde=False, bins=8)\n",
-    "seaborn.distplot([s[1][\"y\"] for s in suggestions], ax=axs[1], kde=False, bins=8)\n",
-    "seaborn.distplot([s[1][\"z\"] for s in suggestions], ax=axs[2], kde=False, bins=8)\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "So we see, it is (somewhat) uniform across the search space. Let's now see how this changes if we return some information back to the search:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:12.919073Z",
-     "start_time": "2020-03-04T23:15:12.905745Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "for i in range(50):\n",
-    "    if suggestions[i][1][\"x\"] < 0.5:\n",
-    "        loss = 0.0\n",
-    "    else:\n",
-    "        loss = 100.0\n",
-    "    search.submit(i, loss=loss)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We're now pretending that only \"x\" matters, passing a low loss only if `x < 0.5`. This will cause the search to focus on this region of the search space. Let's see it for ourselves!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:16.115873Z",
-     "start_time": "2020-03-04T23:15:12.925081Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "new_suggestions = [search.suggest() for i in range(200)]\n",
-    "fig, axs = plt.subplots(nrows=1, ncols=3)\n",
-    "fig.tight_layout(pad=1.0)\n",
-    "seaborn.distplot([s[1][\"x\"] for s in new_suggestions], ax=axs[0], kde=False, bins=8)\n",
-    "seaborn.distplot([s[1][\"y\"] for s in new_suggestions], ax=axs[1], kde=False, bins=8)\n",
-    "seaborn.distplot([s[1][\"z\"] for s in new_suggestions], ax=axs[2], kde=False, bins=8)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Clearly, we have managed to shift the distribution towards `x < 0.5`.\n",
-    "\n",
-    "### Running an Optimisation\n",
-    "\n",
-    "With this basic understanding under our belt, we can now run an actual optimisation:\n",
-    "\n",
-    "(Don't be alarmed by the red output background.)"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:18.636243Z",
-     "start_time": "2020-03-04T23:15:16.119707Z"
-    },
-    "scrolled": false
-   },
-   "outputs": [],
-   "source": [
-    "search = cmlkit.tune.Hyperopt(space, method=\"tpe\")  # reset the search\n",
-    "run = cmlkit.tune.Run(\n",
-    "    search=search,\n",
-    "    evaluator=F(),\n",
-    "    stop={\"stop_max\": {\"count\": 100}},  # specify the stopping mechanism: simply run until 100 trials\n",
-    "    context={\"max_workers\": 1},  # number of parallel workers\n",
-    ")\n",
-    "run.prepare()  # get ready...\n",
-    "run.run()  # let's go!\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "On the backend, a couple of notable things happened just now:\n",
-    "\n",
-    "- We made a new `Run`, which gave itself a randomly assigned, human-readable name,\n",
-    "- It made a folder for itself, called `run_{name}` (that's what `.prepare` does),\n",
-    "- It ran the optimisation (essentially, getting suggestions, dispatching their evaluation to the workers, and submitting the results back to the `Search`),\n",
-    "- It stored the results in an internal database, and\n",
-    "- It wrote every step of the run into a linear record, the `tape`.\n",
-    "\n",
-    "It also wrote out this `tape` into the folder as a starting point for continuing the run, and the top 5 results as `.yaml` files. (And a periodically updated `status.txt` file that you can easily `watch cat` on a remote system.)\n",
-    "\n",
-    "First, let's have a look at the results:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:18.665774Z",
-     "start_time": "2020-03-04T23:15:18.641335Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "run.state.evals.top_suggestions()"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:18.687407Z",
-     "start_time": "2020-03-04T23:15:18.672209Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "# or alternatively, we can just read from disk:\n",
-    "print(cmlkit.read_yaml(run.work_directory / \"suggestion-0\"))\n",
-    "# this is the current best result."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "As you can see, we've done *alright* on our toy optimisation problem. `hyperopt` is really not build for such low-dimensional problems! But, as you've suspected, winning the toy problem was not the real point...\n",
-    "\n",
-    "Let's have a quick peek behind the curtain at the `tape`, the record of the optimisation:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:18.958272Z",
-     "start_time": "2020-03-04T23:15:18.690377Z"
-    },
-    "scrolled": true
-   },
-   "outputs": [],
-   "source": [
-    "run.state.tape.raw[0:4]  # raw just gets us something subscriptable"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We see that there are two types of entires: \n",
-    "\n",
-    "- `\"suggest\"`: We get a suggestion from the `Search`.\n",
-    "- `\"submit\"`: We return a result to the `Search`.\n",
-    "\n",
-    "If we replay them in order, we arrive at the exact same state of the optimisation. This is how we, for instance, restart a `Run`:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:23.720715Z",
-     "start_time": "2020-03-04T23:15:18.960536Z"
-    },
-    "scrolled": true
-   },
-   "outputs": [],
-   "source": [
-    "new = cmlkit.tune.Run.restore(\n",
-    "    run.work_directory,\n",
-    "    new_stop={\"stop_max\": {\"count\": 200}}, # we overwrite the stopping directive to run another 100 steps!\n",
-    ")\n",
-    "new.run()  # let's go"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Alright! At this point, we've seen most of what the `cmlkit.tune` module does. To end this tutorial, we will optimise an actual model, not a toy `Evaluator`. "
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "### A Real Problem\n",
-    "\n",
-    "We will use a search space for SOAP:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:23.787092Z",
-     "start_time": "2020-03-04T23:15:23.728658Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "space = {\n",
-    "    \"model\": {\n",
-    "        \"per\": \"cell\",\n",
-    "        \"regression\": {\n",
-    "            \"krr\": {\n",
-    "                \"kernel\": {\n",
-    "                    \"kernel_atomic\": {\n",
-    "                        \"norm\": False,\n",
-    "                        \"kernelf\": {\n",
-    "                            \"gaussian\": {\n",
-    "                                \"ls\": [\"hp_loggrid\", \"ls_start\", -13, 13, 27]  # base2 loggrid, i.e 2**-13 to 2**13\n",
-    "                            }\n",
-    "                        },\n",
-    "                    }\n",
-    "                },\n",
-    "                \"nl\": [\"hp_loggrid\", \"nl_start\", -18, 0, 19]\n",
-    "            }\n",
-    "        },\n",
-    "        \"representation\": {\n",
-    "            \"ds_soap\": {\n",
-    "                \"elems\": [8, 13, 31, 49],\n",
-    "                \"n_max\": [\"hp_choice\", \"n_max\", [2, 3, 4, 5, 6, 7, 8]],\n",
-    "                \"l_max\": [\"hp_choice\", \"l_max\", [2, 3, 4, 5, 6, 7, 8]],\n",
-    "                \"cutoff\": [\"hp_choice\", \"cutoff\", [2, 3, 4, 5, 6, 7, 8, 9, 10]],\n",
-    "                \"sigma\": [\"hp_loggrid\", \"sigma\", -20, 6, 27],\n",
-    "                \"rbf\": \"gto\",\n",
-    "            }\n",
-    "        },\n",
-    "    }\n",
-    "}\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We'll use a \"simple\" evaluator, `cmlkit.tune.TuneEvaluatorHoldout`, which uses predefined training and test sets to compute a loss, without cross-validation.\n",
-    "\n",
-    "For parallelisation, these sets need to be saved to disk. In order to save us some time, I've already rolled these splits (from the original kaggle challenge training set) and saved them:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:25.383823Z",
-     "start_time": "2020-03-04T23:15:23.864202Z"
-    },
-    "scrolled": true
-   },
-   "outputs": [],
-   "source": [
-    "print(cmlkit.load_dataset(\"data/cmlkit/nmd18_hpo_train\").n)\n",
-    "print(cmlkit.load_dataset(\"data/cmlkit/nmd18_hpo_test\").n)\n",
-    "evaluator = cmlkit.tune.TuneEvaluatorHoldout(\n",
-    "    train=\"data/cmlkit/nmd18_hpo_train\", test=\"data/cmlkit/nmd18_hpo_test\", target=\"fe\", lossf=\"rmse\"\n",
-    ")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We can also instantiate our Search!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:25.514758Z",
-     "start_time": "2020-03-04T23:15:25.387384Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "search = cmlkit.tune.Hyperopt(space, method=\"tpe\")\n",
-    "s = search.suggest()[1]\n",
-    "print(s)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Now, every sample from the `Search` is an entire `Model` config! We're ready to roll."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:25.575978Z",
-     "start_time": "2020-03-04T23:15:25.517233Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "search = cmlkit.tune.Hyperopt(space, method=\"tpe\")\n",
-    "run = cmlkit.tune.Run(\n",
-    "    search=search,\n",
-    "    evaluator=evaluator,\n",
-    "    stop={\"stop_max\": {\"count\": 400}},\n",
-    "    context={\"max_workers\": 2},\n",
-    "    name=\"nmd18_hpo\"\n",
-    ")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Ok, I'll come clean -- we will not actually run this optimisation, to spare you the experience of waiting for a couple of hours. The `nmd18` dataset contains large structures, and periodic boundary conditions, which makes calculations for any sizeable non-toy training/test sets impractically slow for a tutorial like this. So, let's skip straight to the results!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:42.200993Z",
-     "start_time": "2020-03-04T23:15:25.581911Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "run = cmlkit.tune.Run.checkout(\"data/cmlkit/run_nmd18_hpo\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "By the way, this is how you access the results of already finished runs! You simply run `checkout`.\n",
-    "\n",
-    "To end, let's check how our model would have done in the 2018 Kaggle challenge."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2020-03-04T23:15:42.236485Z",
-     "start_time": "2020-03-04T23:15:42.204038Z"
-    },
-    "scrolled": true
-   },
-   "outputs": [],
-   "source": [
-    "model_config = run.state.evals.top_suggestions()[0]\n",
-    "model = cmlkit.from_config(model_config, context={\"cache\": \"disk\"})  # we use pre-computed cached values for speed!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "start_time": "2020-03-04T23:16:09.037Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "train = cmlkit.load_dataset(\"data/cmlkit/nmd18_train\")\n",
-    "test = cmlkit.load_dataset(\"data/cmlkit/nmd18_test\")\n",
-    "\n",
-    "model.train(train, target=\"fe\")\n",
-    "pred = model.predict(test, per=\"cation\")"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "ExecuteTime": {
-     "start_time": "2020-03-04T23:16:09.812Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "true = test.pp(\"fe\", per=\"cation\")\n",
-    "loss = cmlkit.evaluation.get_loss(\"rmse\", \"rmsle\", \"mae\", \"r2\")\n",
-    "print(loss(true, pred))\n",
-    "print(f\"\\n\\nFor context: std of true values is {true.std():.3f}!\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Congratulations! We've gotten close to the top of the Kaggle 2018 challenge. (For the full results and discussion, please see [Sutton *et al.* (2019)](https://doi.org/10.1038/s41524-019-0239-3).)\n",
-    "\n",
-    "(Also, remember our results from the beginning? This time, we didn't rely on a pre-tuned model, we tuned all the parameters from scratch, and substantially *improved* on all the metrics. Nice!)\n",
-    "\n",
-    "## Next steps ☀️\n",
-    "\n",
-    "And with this, we're at the end of this tutorial. Well done!\n",
-    "\n",
-    "As a next step, I would recommend taking a good look at the [repository](https://github.com/sirmarcel/cmlkit/). The readme will essentially reiterate this tutorial, so you can skip it, but it is recommended to simply [browse the source  code](https://github.com/sirmarcel/cmlkit/tree/master/cmlkit). Every submodule has a detailed explanation of its mechanics and interfaces.\n",
-    "\n",
-    "If you're interested in how `cmlkit` is used in production, you can also take a look at [`repbench`]( https://marcel.science/repbench ), which is a benchmarking effort for multiple representations, and the origin of `cmlkit`. The repositories associated with the project will provide many examples of how to use `cmlkit` and what you can do with it.\n",
-    "\n",
-    "Start building! 🌱\n",
-    "\n",
-    "*festina lente*"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "<br /><br /><br />\n",
-    "\n",
-    "Thanks to Luca Ghiringelli, Xiaojuan Hu, Daniel Speckhard, Nikita Rybin, and Thomas Purcell for feedback on this tutorial.\n",
-    "\n",
-    "The technical requirements for running this tutorial are:\n",
-    "\n",
-    "- [`cmlkit`](https://marcel.science/cmlkit) with its dependencies (most notably `qmmlpack`)\n",
-    "- `cscribe` plugin for `cmlkit`\n",
-    "- `seaborn` (and `matplotlib`, etc.)\n",
-    "- `jupyter`\n",
-    "- For the `Dataset` appendix, also `nglview` and `ipywidgets` (have to be enabled for `jupyter`)\n",
-    "\n",
-    "You also need the tutorial repository, including the data, and the environment variable `CML_PLUGINS=cscribe` to be set.\n",
-    "\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Appendix: Datasets\n",
-    "\n",
-    "Since you probably won't be only using pre-made datasets, here is a quick introduction in how to work with `cmlkit` `Datasets`. This is, however, only a short overview -- for details, please consult the `cmlkit` source code, which will explain things in detail!\n",
-    "\n",
-    "If you're mostly want to know how to generate `Datasets`, you might also want to check out the [`repbench-datasets`](https://gitlab.com/repbench/repbench-datasets) repository, which contains many examples of how to convert different dataset formats to `cmlkit`.\n",
-    "\n",
-    "### Overview\n",
-    "\n",
-    "In essence, a `Dataset` collects two kinds of information: the *geometry* of a set of systems (molecules or periodic systems, i.e. materials or crystals), and scalar *properties* associated with each of these systems, such as formation energies, or bandgaps.\n",
-    "\n",
-    "Under the hood, this information is stored as `numpy` arrays: The geometric information as arrays `z`, for atomic numbers, `r` for positions, and `b` for basis vectors (if periodic systems are used). Properties are also stored in `numpy` arrays, as values in a dictionary called `p`. Let's have a look at our toy dataset from above:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(toy_test.z[0])\n",
-    "print(toy_test.r[0])\n",
-    "print(toy_test.b[0])"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This gives us, in order: The elements in that structure, then the positions of each atom in the unit cell, then the three basis vectors. The properties here are bandgap (`bg`) and formation energy (`fe`) (as well as the spacegroup (`sg`), which we ignore for now):"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(toy_test.p.keys())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "You can access these properties, normalised to various things, using the `pp` (property per) method:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(toy_test.pp(\"fe\", per=\"atom\"))  # fe per *atom*\n",
-    "print(toy_test.pp(\"fe\", per=\"cell\"))  # fe per *unit cell*\n",
-    "print(toy_test.pp(\"fe\", per=\"non_O\"))  # fe per *atom which is not Oxygen*"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "In addition, the `Dataset` provides various associated information:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(toy_test.info.keys())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "A `Dataset` can also have a *name*, which is used to load it from the file system via `cmlkit.load_dataset`."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "print(toy_test.name)\n",
-    "print(nmd18_train.name)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "You can generate your own `Dataset` by simply instantiating it with `z` (`numpy` object array wrapping a `list`, each entry being an `numpy` array), `r` (same) and `b` (either `None` or a `numpy` array of basis vectors), and if you want to train, also at least one property `p`:\n",
-    "\n",
-    "```python\n",
-    "my_dataset = cmlkit.dataset.Dataset(z=my_z, r=my_r, b=my_b, p={\"my_prop\": my_property}, name=\"my_name\")\n",
-    "```\n",
-    "\n",
-    "***\n",
-    "\n",
-    "**One very important detail: `cmlkit` expects you to provide all properties normalised PER ATOM.** This is due to the built-in conversion facilities: You need one common basis to start from. You can also, which is not encouraged, pass un-normalised properties and then make sure all models have `per=None`, and you always use `per=None` so there is no conversion. Otherwise, things will GO WRONG.\n",
-    "\n",
-    "***\n",
-    "\n",
-    "Then, you will usually want to save it:\n",
-    "\n",
-    "```python\n",
-    "my_dataset.save(directory=\"/my/dataset/storage/path\")\n",
-    "```\n",
-    "\n",
-    "Currently, `cmlkit` also provides the `Subset` class which can be used to generate subsets of a \"parent\" dataset. We have used it above to make the train and test splits, for example. (In the future, this class will probably be deprecated in favour of only using `Dataset`, with a convenience function to generate subsets.) It can be used like this (`idx` being the indices you want to include in the `Subset`):\n",
-    "\n",
-    "```python\n",
-    "my_subset = cmlkit.dataset.Subset.from_dataset(my_dataset, idx=[1, 2, 4, ...], name=\"my_subset\")\n",
-    "```\n",
-    "\n",
-    "### `ase`\n",
-    "\n",
-    "The `Dataset` class also features a (very minimal, at the moment) interface to `ase`, consisting of two methods:\n",
-    "\n",
-    "```\n",
-    "dataset.as_Atoms() # => return geometries in dataset as list of Atoms object\n",
-    "dataset.from_Atoms(list_of_Atoms, p={...}) # => use geometries in Atoms objects to make a dataset\n",
-    "```\n",
-    "\n",
-    "You can use this, for example, to visualise structures using `ase`, or to use the many file formats supported by `ase`!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "from ase.visualize import view\n",
-    "atoms = toy_test.as_Atoms()[0]\n",
-    "\n",
-    "view(atoms, viewer=\"nglview\")"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Please note that currenty, the `ase` interface does not know about `Calculators`, and so will not, for example, automatically extract energies from the `Atoms`. You have to pre-process these things yourself!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {