diff --git a/dimensionality_reduction.ipynb b/dimensionality_reduction.ipynb
index 7808e30d9b51d16692855e2050c74cc940af4b75..2254064ce001f4d28ef163189c4c0124b9892849 100644
--- a/dimensionality_reduction.ipynb
+++ b/dimensionality_reduction.ipynb
@@ -20,7 +20,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "This tutorial is dedicated to dimensionality reduction techniques (DDR). DDR is a group of methods that transform data from high-dimensional feature space to a low-dimensional feature space. In other words, these methods reduce the number of features either by manually selecting the best representative features (first class) or by combining and creating new representative features (second class). The ultimate goal of DDR techniques is to preserve the variation of the original dataset as much as possible while reducing the number of features. In this tutorial, we focus on the second class of DDR techniques and explain some of the well-known methods in detail and with practical examples. Part of this notebook is inspired by the book \"An Introduction to Statistical Learning: with Applications in R\" by Robert Tibshirani et al. We thank the authors of this book. \n",
+    "This tutorial is dedicated to dimension reduction techniques (DDR). DDR is a group of methods that transform data from high-dimensional feature space to a low-dimensional feature space. In other words, these methods reduce the number of features either by manually selecting the best representative features (first class) or by combining and creating new representative features (second class). The ultimate goal of DDR techniques is to preserve the variation of the original dataset as much as possible while reducing the number of features. In this tutorial, we focus on the second class of DDR techniques and explain some of the well-known methods in detail and with practical examples. Part of this notebook is inspired by the book \"An Introduction to Statistical Learning: with Applications in R\" by Robert Tibshirani et al. We thank the authors of this book. \n",
     "\n",
     "<div style=\"padding: 1ex; margin-top: 1ex; margin-bottom: 1ex; border-style: dotted; border-width: 1pt; border-color: blue; border-radius: 3px;\">\n",
     "James, Gareth and Witten, Daniela and Hastie, Trevor and Tibshirani, Robert: <span style=\"font-style: italic;\">An Introduction to Statistical Learning: with Applications in R</span>, Springer New York, 2014, 1461471370, 9781461471370 <a href=\"https://link.springer.com/book/10.1007/978-1-4614-7138-7#authorsandaffiliationsbook\" target=\"_blank\">[PDF]</a> .\n",
@@ -53,7 +53,7 @@
     "</td>\n",
     "</tr></table>\n",
     "\n",
-    "In most realistic machine learning problems, we need to deal with a large number of features, i.e., a multidimensional dataset that makes it difficult and challenging for us to analyze, visualize and fit a machine learning model. In such problems, one instead of dealing with many features can reduce the number of features by applying a set of methods called data-dimensionality reduction (DDR) techniques. \n",
+    "In most realistic machine learning problems, we need to deal with a large number of features, i.e., a multidimensional dataset that makes it difficult and challenging for us to analyze, visualize and fit a machine learning model. In such problems, one instead of dealing with many features can reduce the number of features by applying a set of methods called data-dimension reduction (DDR) techniques. \n",
     "\n",
     "In all DDR methods, first, a new set of transformed features are created. For instance, this can be done by a linear combination of the original features and creating a new set of features as follows:\n",
     "\n",
@@ -98,7 +98,7 @@
     "    \n",
     "</div> -->\n",
     "\n",
-    "Data dimensionality reduction methods are generally divided into two classes depending on how they reduce the number of features:\n",
+    "Data dimension reduction methods are generally divided into two classes depending on how they reduce the number of features:\n",
     "\n",
     "1. Select the best representative features\n",
     "\n",
@@ -630,7 +630,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "UMAP is a novel and powerful data dimensionality reduction method (DDR) introduced in 2018 by Leland McInnes and his colleagues. UMAP not only reduce the dimension but also preserves the clusters and the relationship between them. Although this method is very much similar to t-SNE (they both contruct a high dimenstional graph and map it to a low dimensional graph) its mathematical foundation is different. UMAP has some advantages that makes it more practical than t-SNE. For instance, UMAP is much faster than t-SNE and it preserves the global structure of the data while t-SNE more focuses on local structures not the global structure. These advantages led UMAP to replace t-SNE in the scientific community.\n",
+    "UMAP is a novel and powerful data dimensionality reduction method (DDR) introduced [in 2018 by Leland McInnes and his colleagues](https://www.youtube.com/watch?v=nq6iPZVUxZU). UMAP not only reduces the dimension but also preserves the clusters and the relationship between them. Although this method has a similar purpose as t-SNE (they both contruct a high-dimenstional graph and map it to a low dimensional graph), its mathematical foundation is different. UMAP has some advantages that makes it more practical than t-SNE. For instance, UMAP is much faster than t-SNE and it preserves the global structure of the data while t-SNE focuses more on local structures rather than the global structure. These advantages are leading UMAP to steadly replace t-SNE in the scientific community.\n",
     "\n",
     "<div style=\"padding: 1ex; margin-top: 1ex; margin-bottom: 1ex; border-style: dotted; border-width: 1pt; border-color: blue; border-radius: 3px;\">\n",
     "Wang, Quan : <span style=\"font-style: italic;\"> UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction </span> arXiv, 2018, 1802.03426 <a href=\"\n",
@@ -640,7 +640,7 @@
     "\n",
     "## How does UMAP work?\n",
     "\n",
-    "UMAP algorithm works in two steps:\n",
+    "We explain the UMAP based on its [documentation](https://umap-learn.readthedocs.io/en/latest/how_umap_works.html). UMAP algorithm works in two steps:\n",
     "\n",
     "1. Contruct a high dimensional graph\n",
     "2. Find the low dimensional representation of that graph\n",
@@ -650,7 +650,7 @@
     "UMAP first approximates a manifold on which the data lie. Since we are mostly dealing with finite data, it is assumed that the data is uniformly distributed on the manifold. But in the real world, the data is not so distributed, so we need to assume that the manifold has a Riemannian metric, and then find a metric for which the data is approximately uniformly distributed. \n",
     "\n",
     "\n",
-    "Then we make simple combinatorial building blocks, called \"simplicies\" out of data. k-simplex is created by taking the convex hull of k+1 independent points. In the following picture low dimensional simplices are shown. \n",
+    "Then we make simple combinatorial building blocks, called \"simplicies\" out of data. Each k-simplex is created by taking the convex hull of k+1 independent points. In the following picture low dimensional simplices are shown. \n",
     "\n",
     "<table><tr>\n",
     "<td> \n",
@@ -662,11 +662,11 @@
     "</td>\n",
     "</tr></table>\n",
     "\n",
-    "For our finite data, we can do this by creating a ball with a fixed radius for each data point. Since we assume that the data is uniformly distributed on the manifold, it is easy to choose an appropriate radius; otherwise, a small radius leads to many connected components and a large radius leads to some very high-dimensional simplices that we do not want to work with. As we mentioned earlier, we need to use Reimanian geometry for this assumption. This means that each point has a local metric space associated with it so that we can measure the distance in a meaningful way. In other word, we compute a local notion of distance for each point. In practice, this means that a unit ball around a point extends to the kth nearest neighbor of the point. k is the size of the sample we use to approximate the local notion of distance. Since each point has its own distance function, we can simply select ball of radius one with respect to this local distance function. \n",
+    "For our finite data, we can do this by creating a ball with a fixed radius for each data point. Since we assume that the data is uniformly distributed on the manifold, it is easy to choose an appropriate radius; otherwise, a small radius leads to many connected components and a large radius leads to some very high-dimensional simplices that we do not want to work with. As we mentioned earlier, we need to use Reimanian geometry for this assumption. This means that each point has a local metric space associated with it so that we can measure the distance in a meaningful way. Actaully we consider a patch for each point and we map it down to an standard euclidean space in a different scale. So there is a different notion of distance for each point depending on where they are located (in a denser and sparser area). In practice, this means that a unit ball around a point extends to the kth nearest neighbor of the point. k is the size of the sample we use to approximate the local notion of distance. Since each point has its own distance function, we can simply select ball of radius one with respect to this local distance function. \n",
     "\n",
     "k is an important parameter ($n$_$neighbor$) in UMAP that determines how local we want to estimate the Riemannian metric. Thus, a small value of k gives us more details about the local structures, while a large value of k gives us more information about the global structures.\n",
     "\n",
-    "With the Riemannian metric we can even work in a fuzzy topology, which means that there is a certanity for each point to be located in a ball of a given radius (For each point in the neighborhood of a given point, this certainty is also called weights or similarity values). The further we move away from the center of each point, the smaller this certainty becomes. To avoid isolating the data points, especially in high dimensions, we activate fuzzy certainty beyond the first nearest neighbor. This ensures that each point on the manifold has at least one neighbor to which it is connected (certainty for the first neighbor is 100% and for other is between 0% to 100%). This is called local connectivity in UMAP and has a hyperparameter ($local$_$connectivity$) that determines the least number of connections. \n",
+    "With the Riemannian metric we can even work in a fuzzy topology, which means that there is a certanity for each point to be located in a ball of a given radius (fuzzy value or weights). The further we move away from the center of each point, the smaller this certainty becomes. To avoid isolating the data points, especially in high dimensions, we activate fuzzy certainty beyond the first nearest neighbor. This ensures that each point on the manifold has at least one neighbor to which it is connected (certainty for the first neighbor is 100% and for other is between 0% to 100%). This is called local connectivity in UMAP and has a hyperparameter ($local$_$connectivity$) that determines the least number of connections. \n",
     "\n",
     "\n",
     "<table><tr>\n",
diff --git a/metainfo.json b/metainfo.json
index ed0826f8cef37a0b8668ea94cf7334a8767b1840..991ff918579ff29430b92efa7ee7ff29047b5d01 100644
--- a/metainfo.json
+++ b/metainfo.json
@@ -4,13 +4,13 @@
     "Ghiringhelli, Luca M."
   ],
   "email": "ghiringhelli@fhi-berlin.mpg.de",
-  "title": "An introduction to Data-Dimensionality Reduction",
+  "title": "Introduction to dimension reduction of data",
   "description": "In this tutorial...",
   "notebook_name": "dimensionality_reduction.ipynb",
   "url": "",
   "link": "",
   "link_public": "",
-  "updated": "2022-07-01",
+  "updated": "2022-06-30",
   "flags":{
     "featured": true,
     "top_of_list": false