diff --git a/docs/howto/customization/basics.md b/docs/howto/customization/basics.md
index c5ad63ce06527731cd152e6f17d0895a4bb5b74e..33e3e1910de062602701da502c019eed57f45e06 100644
--- a/docs/howto/customization/basics.md
+++ b/docs/howto/customization/basics.md
@@ -318,9 +318,43 @@ Here are a few other built-in section definitions and packages of definitions:
 |nomad.datamodel.EntryData|An abstract section definition for the `data` section.|
 |nomad.datamodel.ArchiveSection|Allows to put `normalize` functions into your section definitions.|
 |nomad.datamodel.metainfo.eln.*|A package of section definitions to inherit commonly used quantities for ELNs. These quantities are indexed and allow specialization to utilize the NOMAD search.|
-|nomad.parsing.tabular.TableData|Allows to inherit parsing of references .csv and .xls files.|
 |nomad.datamodel.metainfo.workflow.*|A package of section definitions use by NOMAD to define workflows|
 |nomad.metainfo.*|A package that contains all *definitions* of *definitions*, e.g. NOMAD's "schema language". Here you find *definitions* for what a sections, quantity, subsections, etc. is.|
+|nomad.parsing.tabular.TableData|Allows to inherit parsing of referenced .csv and .xls files. See the [detailed description](tabular.md) to learn how to include this class and its annotations in a YAML schema.|
+|nomad.datamodel.metainfo.basesections.HDF5Normalizer|Allows to link quantities to HDF5 datasets, improving performance for large data. This class and its related annotations can be included directly in a YAML schema. [Dedicated classes](hdf5.md#how-to-use-hdf5-to-handle-large-quantities) can be used to write a parser.|
+
+### HDF5Normalizer
+
+A different flavor of _**reading**_ HDF5 files into NOMAD quantities is to define a
+[custom schema](../../tutorial/custom.md) and inherit `HDF5Normalizer` into its base sections. Using the
+`HDF5Normalizer` class has two essential components: first, define a quantity annotated with the `FileEditQuantity`
+component so that the `*.h5` file can be dropped or uploaded, and second, define the relevant quantities with a `path`
+attribute under the `hdf5` annotation. These quantities are then picked up by the normalizer, which extracts the
+values found at the denoted `path`.
+
+A minimal example that imports your HDF5 file and maps it to NOMAD quantities uses the following custom schema:
+
+```yaml
+definitions:
+  name: 'hdf5'
+  sections:
+    Test_HDF5:
+      base_sections:
+        - 'nomad.datamodel.data.EntryData'
+        - 'nomad.datamodel.metainfo.basesections.HDF5Normalizer'
+      quantities:
+        datafile:
+          type: str
+          m_annotations:
+            eln:
+              component: FileEditQuantity
+        charge_density:
+          type: np.float32
+          shape: [ '*', '*', '*' ]
+          m_annotations:
+            hdf5:
+              path: '/path/to/charge_density'
+```
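+
+As a minimal, illustrative sketch (the file name and array contents are placeholders), an HDF5 file whose
+layout matches the `path` annotation above can be created with `h5py`:
+
+```python
+import h5py
+import numpy as np
+
+# Create an example .h5 file; the internal dataset path must match the
+# '/path/to/charge_density' annotation in the schema above.
+with h5py.File('example.h5', 'w') as f:
+    f.create_dataset(
+        '/path/to/charge_density',
+        data=np.zeros((2, 2, 2), dtype=np.float32),
+    )
+```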
 
 
 ## Separating data and schema package
diff --git a/docs/howto/customization/hdf5.md b/docs/howto/customization/hdf5.md
index 6b56d3ad5b12327d0170d3543e044cf533c84658..8c13fca1c92e208d56f1bbc13983bdf106944b77 100644
--- a/docs/howto/customization/hdf5.md
+++ b/docs/howto/customization/hdf5.md
@@ -10,7 +10,9 @@ cannot scale.
 To address the issue, the option to use auxiliary storage systems optimized for large
 data is implemented. In the following we discuss two quantity types to enable the writing
 of large datasets to HDF5: `HDF5Reference` and `HDF5Dataset`. These are defined in
-`nomad.datamodel.hdf5`.
+`nomad.datamodel.hdf5`. Another class, `HDF5Normalizer`, defined in
+`nomad.datamodel.metainfo.basesections`, can be inherited
+[and used directly in a YAML schema](basics.md#hdf5normalizer).
 
 ## HDF5Reference
 
@@ -74,38 +76,7 @@ file in another upload, follow the same form for
 To read a dataset, use `read_dataset` and provide a reference. This will return the value
 cast in the type of the dataset.
 
-## HDF5Normalizer
-
-A different flavor of _**reading**_ HDF5 files into NOMAD quantities is through defining a
-[custom schema](../../tutorial/custom.md) and inheriting `HDF5Normalizer` into base-sections. Two essential components
-of using `HDF5Normalizer` class is to first define a quantity that is annotated with `FileEditQuantity` field
-to enable one to drop/upload the `*.h5` file, and to define relevant quantities annotated with `path`
-attribute under `hdf5`. These quantities are then picked up by the normalizer to extract the values to be found
-denoted by the `path`.
-
-A minimum example to import your hdf5 and map it to NOMAD quantities is by using the following custom schema:
-
-```yaml
-definitions:
-  name: 'hdf5'
-  sections:
-    Test_HDF5:
-      base_sections:
-        - 'nomad.datamodel.data.EntryData'
-        - 'nomad.datamodel.metainfo.basesections.HDF5Normalizer'
-      quantities:
-        datafile:
-          type: str
-          m_annotations:
-            eln:
-              component: FileEditQuantity
-        charge_density:
-          type: np.float32
-          shape: [ '*', '*', '*' ]
-          m_annotations:
-            hdf5:
-              path: '/path/to/charge_density'
-```
+
 
 ## HDF5Dataset
 To use HDF5 storage for archive quantities, one should use `HDF5Dataset`.
@@ -120,7 +91,7 @@ class LargeData(ArchiveSection):
 The assigned value will also be written to the archive HDF5 file and serialized as
 `/uploads/test_upload/archive/test_entry#/data/value`.
 
-To read the dataset, one shall use the context manager to ensure the file is closed properly when done.
+To read the dataset, use the `with` context manager to ensure the file is closed properly when done.
 
 ```python
 archive.data.value = np.ones(3)
@@ -128,24 +99,75 @@ archive.data.value = np.ones(3)
 serialized = archive.m_to_dict()
 serialized['data']['value']
 # '/uploads/test_upload/archive/test_entry#/data/value'
+
 deserialized = archive.m_from_dict(serialized, m_context=archive.m_context)
 with deserialized.data.value as dataset:
     print(dataset[:])
 # array([1., 1., 1.])
 ```
 
+An archive quantity can be assigned either an array or another archive quantity.
+In the latter case, the dataset created in the HDF5 file contains a link to, not a copy of, the array:
+
+```python
+from nomad.datamodel import ArchiveSection
+from nomad.datamodel.hdf5 import HDF5Dataset
+from nomad.metainfo import Quantity
+
+class LargeData(ArchiveSection):
+    value_1 = Quantity(type=HDF5Dataset)
+    value_2 = Quantity(type=HDF5Dataset)
+```
+
+```python
+archive.data.value_1 = np.ones(3)
+
+archive.data.value_2 = archive.data.value_1
+```
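+
+Following the read pattern shown above, both quantities should resolve to the same data after a
+serialize/deserialize round trip (a sketch reusing the earlier steps):
+
+```python
+serialized = archive.m_to_dict()
+deserialized = archive.m_from_dict(serialized, m_context=archive.m_context)
+
+# value_2 is stored as a link, so it yields the same data as value_1
+with deserialized.data.value_2 as dataset:
+    print(dataset[:])
+# array([1., 1., 1.])
+```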
+
+
 ## Visualizing archive HDF5 quantities
 
 NOMAD clients (e.g. NOMAD UI) can pick up on these HDF5 serialized quantities and
 provide respective functionality (e.g. showing a H5Web view).
 
-The [H5WebAnnotation class](../../reference/annotations.html#h5web) contains the attributes that NOMAD will include in the HDF5 file.
-
 <figure markdown>
   ![h5reference](../images/hdf5reference.png)
   <figcaption>Visualizing archive HDF5 reference quantity using H5Web.</figcaption>
 </figure>
 
+When multiple quantities need to be displayed in the same plot, certain attributes must be set on the
+HDF5 file groups so that H5Web can render the plot.
+The [H5WebAnnotation class](../../reference/annotations.html#h5web) contains the attributes to be included in the groups of the HDF5 file,
+provided as section annotations.
+
+In the following example, the `value` quantity already has a dedicated default H5Web rendering.
+Adding annotations to the corresponding section triggers an additional plot rendering, showing `value` vs. `time`.
+
+```python
+class SubstrateHeaterPower(ArchiveSection):
+
+    m_def = Section(a_h5web=H5WebAnnotation(axes='time', signal='value'))
+
+    value = Quantity(
+        type=HDF5Dataset,
+        unit='dimensionless',
+        shape=[],
+        a_h5web=H5WebAnnotation(
+            long_name='power',
+        ),
+    )
+    time = Quantity(
+        type=HDF5Dataset,
+        unit='s',
+        shape=[],
+    )
+```
+
+<figure markdown>
+  ![h5reference](../images/hdf5annotations.png)
+  <figcaption>Adding attributes to HDF5 groups to produce composite plots using H5Web.</figcaption>
+</figure>
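+
+Under the hood (see `HDF5Reference.write_dataset` in this change), these annotations end up as NeXus-style
+attributes in the HDF5 file: `NX_class='NXdata'`, `signal`, and `axes` on the parent group, plus `units` and
+`long_name` on the dataset itself, which is what H5Web uses to compose the plot.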
+
+
 
 ## Metadata for large quantities
 
diff --git a/docs/howto/images/hdf5annotations.png b/docs/howto/images/hdf5annotations.png
new file mode 100644
index 0000000000000000000000000000000000000000..6f0e80b19313deae6264d13c5c551cda5aef4d93
Binary files /dev/null and b/docs/howto/images/hdf5annotations.png differ
diff --git a/gui/src/components/archive/ArchiveBrowser.js b/gui/src/components/archive/ArchiveBrowser.js
index 6aabe87acb04a0ff701b14483f1452cea76b3065..dc5b1c718950786005b66e798a83c48ef28c0b5e 100644
--- a/gui/src/components/archive/ArchiveBrowser.js
+++ b/gui/src/components/archive/ArchiveBrowser.js
@@ -969,7 +969,7 @@ export function H5WebView({section, def, uploadId, title}) {
     const sectionPath = h5Path.split('/').slice(0, -1).join('/')
     return <H5Web key={def.name} upload_id={h5UploadId || uploadId} filename={h5File} initialPath={sectionPath} source={h5Source} sidebarOpen={false}></H5Web>
   }
-  const resolve = (path, parent) => {
+  const resolve = (path, parent, m_def) => {
     let child = parent
     for (const segment of path.split('/')) {
       const properties = child?._properties || child
@@ -980,6 +980,13 @@ export function H5WebView({section, def, uploadId, title}) {
       child = isNaN(index) ? properties[segment] : properties[index] || child
       child = child?.sub_section || child
     }
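+    // If an explicit m_def is given and differs from the resolved definition,
+    // return the matching inheriting section definition instead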
+    if (m_def && m_def !== child._qualifiedName) {
+      for (const section of child?._allInternalInheritingSections || []) {
+        if (section._qualifiedName === m_def) {
+          return section
+        }
+      }
+    }
     return child
   }
 
@@ -989,7 +996,7 @@ export function H5WebView({section, def, uploadId, title}) {
     {h5Web(section, def)}
     {paths.map(path => {
       const subSection = resolve(path, section)
-      const subSectionDef = resolve(path, def)
+      const subSectionDef = resolve(path, def, subSection?.m_def)
       return subSection && subSectionDef && h5Web(subSection, subSectionDef)
     })}
   </Compartment>
diff --git a/nomad/datamodel/hdf5.py b/nomad/datamodel/hdf5.py
index b32422ecd0612331cb6044e6b120b8ee5c4b836f..1ffa1e833b6a6705dd4482bf5d9000523ebe0a8a 100644
--- a/nomad/datamodel/hdf5.py
+++ b/nomad/datamodel/hdf5.py
@@ -67,6 +67,28 @@ class HDF5Wrapper:
 
 
 class HDF5Reference(NonPrimitive):
+    """
+    A utility class for handling HDF5 file operations within a given archive.
+
+    This class provides static methods to read, write, and manage datasets and attributes
+    in HDF5 files. It ensures that the operations are performed correctly by resolving
+    necessary file and upload information from the provided paths and archives.
+
+    Methods
+    -------
+    _get_upload_files(archive, path: str)
+        Retrieves the file ID, path, and upload files object from the given path.
+
+    write_dataset(archive, value: Any, path: str) -> None
+        Writes a given value to an HDF5 file at the specified path.
+
+    read_dataset(archive, path: str) -> Any
+        Reads a dataset from an HDF5 file at the specified path.
+
+    _normalize_impl(self, value, **kwargs)
+        Validates if the given value matches the HDF5 reference format.
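+
+    Example (illustrative, assuming a populated ``archive`` with an upload context)::
+
+        HDF5Reference.write_dataset(archive, np.ones(3), 'data.h5#/group/value')
+        HDF5Reference.read_dataset(archive, 'data.h5#/group/value')
+        # -> array([1., 1., 1.])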
+    """
+
     @staticmethod
     def _get_upload_files(archive, path: str):
         match = match_hdf5_reference(path)
@@ -78,24 +100,62 @@ class HDF5Reference(NonPrimitive):
         from nomad import files
         from nomad.datamodel.context import ServerContext
 
-        upload_id, _ = ServerContext._get_ids(archive, required=True)
+        upload_id = match['upload_id']
+        if not upload_id:
+            upload_id, _ = ServerContext._get_ids(archive, required=True)
         return file_id, match['path'], files.UploadFiles.get(upload_id)
 
     @staticmethod
-    def write_dataset(archive, value: Any, path: str) -> None:
+    def write_dataset(
+        archive, value: Any, path: str, attributes: dict = {}, quantity_name: str = None
+    ) -> None:
         """
         Write value to HDF5 file specified in path following the form
         filename.h5#/path/to/dataset. upload_id is resolved from archive.
-        """
 
+        If attributes is provided, its contents are written to the hdf5 dataset referenced
+        by the archive quantity named quantity_name, and to its parent group.
+        """
         file, path, upload_files = HDF5Reference._get_upload_files(archive, path)
         mode = 'r+b' if upload_files.raw_path_is_file(file) else 'wb'
         with h5py.File(upload_files.raw_file(file, mode), 'a') as f:
-            f.require_dataset(
-                path,
-                shape=getattr(value, 'shape', ()),
-                dtype=getattr(value, 'dtype', None),
-            )[...] = getattr(value, 'magnitude', value)
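+            # Overwrite any existing dataset at this path with the new value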
+            if path in f:
+                del f[path]
+            f[path] = getattr(value, 'magnitude', value)
+            dataset = f[path]
+
+            if attributes and quantity_name:
+                path_segments = path.rsplit('/', 1)
+                if len(path_segments) == 1:
+                    return
+
+                unit = attributes.get('unit')
+                long_name = attributes.get('long_name')
+                if unit:
+                    unit = unit if isinstance(unit, str) else format(unit, '~')
+                    dataset.attrs['units'] = unit
+                    if long_name:
+                        long_name = f'{long_name} ({unit})'
+                if long_name:
+                    dataset.attrs['long_name'] = long_name
+
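+                # Mark the parent group as NXdata and translate signal/axes annotations to the dataset name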
+                target_attributes = {
+                    key: val
+                    for key, val in attributes.items()
+                    if key not in ['signal', 'axes']
+                }
+                target_attributes['NX_class'] = 'NXdata'
+                signal = attributes.get('signal')
+                if quantity_name == signal:
+                    target_attributes['signal'] = path_segments[1]
+
+                axes = attributes.get('axes', [])
+                axes = [axes] if isinstance(axes, str) else axes
+                if quantity_name in axes:
+                    axes[axes.index(quantity_name)] = path_segments[1]
+                    target_attributes['axes'] = axes
+
+                group = dataset.parent
+                group.attrs.update(target_attributes)
 
     @staticmethod
     def read_dataset(archive, path: str) -> Any:
@@ -115,6 +175,22 @@ class HDF5Reference(NonPrimitive):
 
 
 class HDF5Dataset(NonPrimitive):
+    """
+    A utility class for handling HDF5 dataset operations within a given archive.
+
+    This class provides methods to serialize and normalize HDF5 datasets, ensuring
+    that the datasets are correctly referenced and stored within the HDF5 files.
+
+    Methods
+    -------
+    _serialize_impl(self, value, **kwargs)
+        Serializes the given value into a reference string for HDF5 storage.
+
+    _normalize_impl(self, value, **kwargs)
+        Normalizes the given value, ensuring it is a valid HDF5 dataset reference
+        or a compatible data type (e.g., str, np.ndarray, h5py.Dataset, pint.Quantity).
+    """
+
     def _serialize_impl(self, value, **kwargs):
         if isinstance(value, str):
             return value