diff --git a/cmlkit-tutorial.ipynb b/cmlkit-tutorial.ipynb
index 819173e686916263def070d444437bcc089d6094..ed6ca9d12a9d0595be61c4b829b30a5e62d5656d 100644
--- a/cmlkit-tutorial.ipynb
+++ b/cmlkit-tutorial.ipynb
@@ -11,7 +11,7 @@
     "\n",
     "<sup>‡</sup><i>Fritz Haber Institute of the Max Planck Society, Faradayweg 4-6, D-14195 Berlin, Germany</i>\n",
     "\n",
-    "**Related publication: [Langer, Gößmann, Rupp (2020)](http://marcel.science/repbench/)**\n",
+    "**Related publication: [Langer, Goeßmann, Rupp (2020)](http://marcel.science/repbench/)**\n",
     "\n",
     "***\n",
     "\n",
@@ -324,12 +324,12 @@
     "                        # this is needed because SOAP is a local representation, so the KRR model is extensive\n",
     "        \"regression\": {\n",
     "            \"krr\": {\n",
-    "                \"centering\": False,  # we subtract the mean from labels or kernel matrices\n",
+    "                \"centering\": False,  # we do not subtract the mean from labels or kernel matrices\n",
     "                \"kernel\": {\n",
     "                    \"kernel_atomic\": {\n",
     "                        \"kernelf\": {\"gaussian\": {\"ls\": 22.627416997969522}},  # gaussian kernel\n",
     "                        \"norm\": False,  # we don't normalise the kernel by atom counts \n",
-    "                                        # (needed to predict intensive things)\n",
+    "                                        # (needed to predict intensive properties)\n",
     "                    }\n",
     "                },\n",
     "                \"nl\": 9.5367431640625e-07,  # regularisation parameter for KRR\n",
@@ -527,7 +527,22 @@
     "\n",
     "### Exploring the Optimiser\n",
     "\n",
-    "At its core, the `cmlkit.tune` module is an asynchronous, stochastic optimiser based on [`hyperopt`](https://github.com/hyperopt/hyperopt). It essentially allows you do specify a probability distribution over possible parameter choices, draws from that distribution, finds out which combination works best, and then updates the distribution to focus further on promising areas (if the \"Tree-Structured Parzen Estimators\" method is used, otherwise it's a random search). This type of method is most appropriate to high-dimensional search spaces, and can also deal with \"tree structured\" problems, where some paramaters are only needed based on a prior choice. (For a more formal introduction, please see [Bergstra *et al.* (2011)](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization).)\n",
+    "At its core, the `cmlkit.tune` module is an asynchronous, stochastic optimiser based on [`hyperopt`](https://github.com/hyperopt/hyperopt). It essentially allows you to specify a probability distribution over possible parameter choices, draws from that distribution, finds out which combination works best, and then updates the distribution to focus further on promising areas (if the \"Tree-Structured Parzen Estimators\" method is used, otherwise it's a random search). This type of method is most appropriate to high-dimensional search spaces, and can also deal with \"tree structured\" problems, where some paramaters are only needed based on a prior choice. (For a more formal introduction, please see [Bergstra *et al.* (2011)](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization).)\n",
     "\n",
     "Let's first explore the mechanics of the `cmlkit.tune` module. We'll start by building a toy problem:\n",
     "\n",