diff --git a/README.md b/README.md
index fa7b0a68c66c6df0dc468165ba621e6296cfda0f..f6a7269b8c4b082fc68e4491b7a766cbb4a08001 100644
--- a/README.md
+++ b/README.md
@@ -1,60 +1,81 @@
 # Kubernetes on the MPCDF HPC Cloud
 
-The [Heat orchestration template](https://docs.openstack.org/heat/ussuri/template_guide/hot_spec.html) "Magnum ohne Magnum" (MOM) described below automates the deployment of a production-ready Kubernetes cluster on the MPCDF [HPC Cloud](https://docs.mpcdf.mpg.de/doc/computing/cloud/), including "out-of-the-box" [support](https://github.com/kubernetes/cloud-provider-openstack) for persistent storage and load balancers.
-For an equivalent, non-templatized procedure, see the [step-by-step](step-by-step/) version.
+The [Heat orchestration template](https://docs.openstack.org/heat/ussuri/template_guide/hot_spec.html) "Magnum ohne
+Magnum" (MOM) described below automates the deployment of a production-ready Kubernetes cluster on the MPCDF
+[HPC Cloud](https://docs.mpcdf.mpg.de/doc/computing/cloud/), including "out-of-the-box"
+[support](https://github.com/kubernetes/cloud-provider-openstack) for persistent storage and load balancers. For an
+equivalent, non-templatized procedure, see the [step-by-step](step-by-step/) version.
 
 ## Deployment
 
 ### Dashboard
 
-1. [Create](https://hpccloud.mpcdf.mpg.de/dashboard/identity/application_credentials/create/) an application credential with default settings.
-   Record the secret somewhere safe.
-2. [Launch](https://hpccloud.mpcdf.mpg.de/dashboard/project/stacks/select_template) a new orchestration stack.
-    - Select the template `mom-template.yaml` as a local file or [URL](https://gitlab.mpcdf.mpg.de/mpcdf/cloud/kubernetes/-/raw/master/mom-template.yaml).
-    - Provide (at least) the application credential id and secret, as well as the keypair you want to use to login to the controller node.
-      Note that "Password for user..." can be any string, since it is not actually used by this particular template.
-
-### Command-line
-
+1. [Create](https://hpccloud.mpcdf.mpg.de/dashboard/identity/application_credentials/create/) an application credential
+   with default settings. Record the secret somewhere safe. Or, via the command line:
-```sh
-openstack application credential create $APP_CRED_NAME
+   ```sh
+   openstack application credential create $APP_CRED_NAME
+   ```
+2. [Launch](https://hpccloud.mpcdf.mpg.de/dashboard/project/stacks/select_template) a new orchestration stack.
+    - Select the template `mom-template.yaml` as a local file or
+      [URL](https://gitlab.mpcdf.mpg.de/mpcdf/cloud/kubernetes/-/raw/master/mom-template.yaml).
+    - Provide (at least) the application credential id and secret, as well as the keypair you want to use to log in to
+      the SSH gateway node.
+
+   Or, via the command line:
+   ```sh
-edit mom-env.yaml  # fill-in (at least) the required parameters
-openstack stack create $STACK_NAME -t mom-template.yaml -e mom-env.yaml
-```
+   edit mom-env.yaml  # fill in (at least) the required parameters
+   openstack stack create $STACK_NAME -t mom-template.yaml -e mom-env.yaml
+   ```
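+
+To follow the deployment and confirm completion, a sketch using standard
+OpenStack CLI commands (`$STACK_NAME` as above):
+```sh
+openstack stack event list $STACK_NAME
+openstack stack show $STACK_NAME -c stack_status -f value
+```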
 
+
 #### Scaling
 
-The number and/or size of the worker nodes may be changed after the initial deployment, as well as the size of the controller.
-The command-line client makes this easy, for example:
+The number and/or size of the worker nodes may be changed after the initial deployment, as well as the size of the
+controller. The command-line client makes this easy, for example:
 ```sh
 openstack stack update $STACK_NAME --existing --parameter worker_count=$COUNT
 ```
-Only the changed parameters need to be mentioned.
-When changing the worker flavor, there will be a rolling reboot of the nodes, one per 90 seconds.
-(Scaling is also possible via the dashboard through the "Change Stack Template" action.
-Be sure to provide the **exact** same version of the template.)
+Only the changed parameters need to be mentioned. When changing the worker flavor, the nodes are rebooted in a rolling
+fashion, one every 90 seconds. Scaling is also possible via the dashboard through the "Change Stack Template" action.
+Be sure to provide the **exact** same version of the template.
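+
+For example, to grow the pool and switch to a different flavor in one update (a
+sketch; the flavor name is a placeholder):
+```sh
+openstack stack update $STACK_NAME --existing \
+    --parameter worker_count=5 \
+    --parameter worker_flavor=mpcdf.xlarge
+```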
+
 
 ## Administration
 
-Login to the controller via its external IP, found on the [dashboard](https://hpccloud.mpcdf.mpg.de/dashboard/project/stacks/) in the "Output" section of the "Overview" tab or with `openstack stack output show $STACK_NAME controlplane_ip -f value -c output_value`:
+You can log in to the gateway via its external IP, found on the
+[dashboard](https://hpccloud.mpcdf.mpg.de/dashboard/project/stacks/) in the
+"Output" section of the "Overview" tab, or with:
+
+```sh
+openstack stack output show $STACK_NAME gateway_ip -f value -c output_value
+ssh $GATEWAY_IP -l root
+```
+
+If you are not in the Garching campus network, you will need to use one of the
+MPCDF SSH gateways to reach the gateway machine; for more information see the
+[connecting](https://docs.mpcdf.mpg.de/faq/connecting.html) documentation.
+
+The tools `kubectl` and `helm`, as well as the administrative credentials for
+your Kubernetes cluster, are installed on the SSH gateway. Try:
 ```sh
-ssh root@$CONTROLPLANE_IP
-  kubectl get node -o wide
-  ...
+kubectl get node -o wide
 ```
-The worker nodes can be reached via the controlplane:
+
+The control plane and worker nodes can be reached via the SSH gateway:
 ```sh
-  ssh -i ~/.ssh/id_rsa root@WORKER_IP
+ssh -i ~/.ssh/id_rsa root@$NODE_IP
 ```
 
+
 ### Remote Clients
 
-1. Download `/root/externaladmin.conf` from the controller to your local machine.
-2. Run `export KUBECONFIG=externaladmin.conf`, or add the contents of `externaladmin.conf` to your existing environment with `kubectl config set-cluster`, etc.
+1. Download `/root/.kube/config` from the gateway to your local machine.
+2. Run `export KUBECONFIG=config`, or merge the contents of `config` into
+   your existing environment with `kubectl config set-cluster`, etc.
+
+Tools such as `kubectl` should now work out-of-the-box, *provided the connections
+originate from the specified API client network*. This parameter may be updated
+as necessary, for example to support off-site administrators. In this case it
+is recommended to choose the smallest possible range.
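+
+One way to merge the downloaded file into an existing kubeconfig (a sketch;
+assumes the file is named `config` in the current directory):
+```sh
+KUBECONFIG=~/.kube/config:config kubectl config view --flatten > merged
+mv merged ~/.kube/config
+```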
 
-Tools such as `kubectl` should now work out-of-the-box, *provided the connections originate from the specified API client network*.
-This parameter may be updated as necessary, for example to support off-site administrators.
-In this case it is recommended to choose the smallest possible range.
 
 ## Example Usage
 
@@ -86,11 +107,14 @@ In this case it is recommended to choose the smallest possible range.
 
 ## Limitations
 
-- The external network, application credential, and key pair cannot be changed after the initial deployment.
-- Load balancers are not automatically removed prior to stack deletion, which blocks stack deletion.
-    If possible, delete these resources from Kubernetes beforehand.
-- Volumes are also not removed automatically (but do not block stack deletion).
-- Kubernetes upgrades and [certificate renewal](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) must be performed manually.
-- Docker is the only supported CRI.
-    This will change starting with Kubernetes 1.24!
-- [Calico](https://projectcalico.docs.tigera.io/getting-started/kubernetes/) with VXLAN overlay is the only supported CNI.
+- The external network, application credential, and key pair cannot be changed
+  after the initial deployment
+- Load balancers are not automatically removed prior to stack deletion, which
+  blocks stack deletion. If possible, delete these resources from Kubernetes
+  beforehand
+- Volumes are also not removed automatically but do not block stack deletion
+- Kubernetes upgrades and [certificate renewal](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
+  must be performed manually
+- containerd is the only supported CRI
+- [Calico](https://projectcalico.docs.tigera.io/getting-started/kubernetes/)
+  with VXLAN overlay is the only supported CNI
diff --git a/common-config.sh b/common-config.sh
new file mode 100644
index 0000000000000000000000000000000000000000..f77e3b72b922dcba9fabe54dcbf2130c569740b6
--- /dev/null
+++ b/common-config.sh
@@ -0,0 +1,108 @@
+#!/bin/bash
+#
+# Configures instances with the basic requirements for a Kubernetes
+# installation.
+#   Parameters (provided by mom-template.yaml):
+#     $KUBERNETES_VERSION  version of Kubernetes to install
+#     $CONTAINERD_VERSION  version of containerd to install
+#  Result:
+#     Node and containerd are configured to support kubernetes. Kubernetes is
+#     installed and locked to desired major version.
+apt-get update
+# --- networking ---
+cat <<___HERE >> /etc/modules-load.d/k8s.conf
+overlay
+br_netfilter
+___HERE
+modprobe overlay
+modprobe br_netfilter
+# sysctl params required by setup, params persist across reboots
+cat <<___HERE >> /etc/sysctl.d/k8s.conf
+net.bridge.bridge-nf-call-iptables  = 1
+net.bridge.bridge-nf-call-ip6tables = 1
+net.ipv4.ip_forward                 = 1
+___HERE
+sysctl --system
+
+# --- container runtime ---
+apt-get install -y \
+    ca-certificates \
+    curl \
+    gnupg \
+    lsb-release
+mkdir -p /etc/apt/keyrings
+curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+echo \
+  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+  https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
+  >> /etc/apt/sources.list.d/docker.list
+apt-get update
+apt-get install -y "containerd.io=$CONTAINERD_VERSION"'.*'
+apt-mark hold containerd.io
+
+# --- Container network
+wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
+mkdir -p /opt/cni/bin
+tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.2.0.tgz
+mkdir -p /etc/cni/net.d
+cat <<___HERE > /etc/cni/net.d/10-containerd-net.conflist
+{
+  "cniVersion": "1.0.0",
+  "name": "containerd-net",
+  "plugins": [
+    {
+      "type": "bridge",
+      "bridge": "cni0",
+      "isGateway": true,
+      "ipMasq": true,
+      "promiscMode": true,
+      "ipam": {
+        "type": "host-local",
+        "ranges": [
+          [{
+            "subnet": "10.88.0.0/16"
+          }],
+          [{
+            "subnet": "2001:4860:4860::/64"
+          }]
+        ],
+        "routes": [
+          { "dst": "0.0.0.0/0" },
+          { "dst": "::/0" }
+        ]
+      }
+    },
+    {
+      "type": "portmap",
+      "capabilities": {"portMappings": true}
+    }
+  ]
+}
+___HERE
+# --- enable the CRI plugin, which the containerd.io package disables by default
+sed -i '/^disabled_plugins/ s/"cri"// ' /etc/containerd/config.toml
+# from https://github.com/kubernetes/kubeadm/issues/2767#issuecomment-1344620047
+cat <<___HERE >> /etc/containerd/config.toml
+version = 2
+[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+  runtime_type = "io.containerd.runc.v2"
+  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+    SystemdCgroup = true
+[plugins."io.containerd.grpc.v1.cri"]
+  sandbox_image = "registry.k8s.io/pause:3.2"
+___HERE
+systemctl restart containerd
+
+# --- kubeadm/kubelet/kubectl ---
+apt-get update
+apt-get install -y apt-transport-https ca-certificates curl
+curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
+echo \
+    "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ \
+    kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
+apt-get update
+apt-get install -y \
+    kubelet="$KUBERNETES_VERSION"'.*' \
+    kubeadm="$KUBERNETES_VERSION"'.*' \
+    kubectl="$KUBERNETES_VERSION"'.*'
+apt-mark hold kubelet kubeadm kubectl
diff --git a/control-plane.yaml b/control-plane.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..740bd18650d0701dfcd79bee865e07b330838aaf
--- /dev/null
+++ b/control-plane.yaml
@@ -0,0 +1,124 @@
+heat_template_version: rocky
+
+description: Additional control plane nodes for your Kubernetes cluster
+
+
+# note that this should depend on the router interface for the net
+parameters:
+  index:
+    type: string
+    label: node index
+    description: This node is the N'th control plane
+  net_id:
+    type: string
+    immutable: true
+    label: private network ID
+    description: ID of the network to create the instance on
+    constraints:
+      - custom_constraint: neutron.network
+  keypair:
+    type: string
+    immutable: true
+    label: ssh key
+    description: The name of the SSH key to add to the node
+    constraints:
+      - custom_constraint: nova.keypair
+  secgroup:
+    type: string
+    label: security group
+    description: Security group for the kubernetes control plane
+    constraints:
+      - custom_constraint: neutron.security_group
+  flavor:
+    type: string
+    label: Control plane flavor
+    description: Resources to allocate for the control plane instances
+    constraints:
+      - custom_constraint: nova.flavor
+    default: mpcdf.large
+  join_cmd:
+    type: json
+    label: kubernetes join command
+    description: kubeadm command to join the cluster as controller
+  pool:
+    type: string
+    label: loadbalancer pool
+    description: load-balanced pool of control plane nodes
+    constraints:
+      - custom_constraint: octavia.pool
+  user_data:
+    type: string
+    label: common Kubernetes configuration
+    description: Configuration element to enable the machine to run kubernetes
+  prefix:
+    type: string
+    label: Resource name prefix
+    description: This string is prepended to all resource names
+    default: "ssh"
+  suffix:
+    type: string
+    label: Resource name suffix
+    description: This string is appended to all resource names
+    default: ""
+  scheduler-hint:
+    type: string
+    label: instances placement policy
+    description: Hint to OpenStack scheduler on how to place the instances
+
+resources:
+  port:
+    type: OS::Neutron::Port
+    properties:
+      name:
+        list_join:
+          - "-"
+          - - {get_param: prefix}
+            - "control-plane-port"
+            - {get_param: index}
+            - {get_param: suffix}
+      network: {get_param: net_id}
+      security_groups: [{get_param: secgroup}]
+  user-data:
+    type: OS::Heat::MultipartMime
+    properties:
+      parts:
+        - config: {get_param: user_data}
+        - config: 
+            str_replace:
+              params:
+                $JOIN_CMD: {get_param: join_cmd}
+              template: |
+                #!/bin/bash
+                apt-get install -y jq
+                $(echo '$JOIN_CMD' | jq -r '."1"')
+  instance:
+    type: OS::Nova::Server
+    properties:
+      name:
+        list_join:
+          - "-"
+          - - {get_param: prefix}
+            - "control-plane"
+            - {get_param: index}
+            - {get_param: suffix}
+      image: Debian 11
+      flavor: {get_param: flavor}
+      key_name: {get_param: keypair}
+      networks:
+        - port: {get_resource: port}
+      scheduler_hints:
+        group: { get_param: scheduler-hint }
+      user_data: {get_resource: user-data}
+      user_data_update_policy: IGNORE
+      user_data_format: "SOFTWARE_CONFIG"
+  pool-member:
+    type: OS::Octavia::PoolMember
+    properties:
+      address: {get_attr: [instance, first_address]}
+      pool: {get_param: pool}
+      protocol_port: 6443
+
+outputs:
+  ip_address:
+    description: IP address of the control plane node
+    value: {get_attr: [instance, first_address]}
diff --git a/controller-config.sh b/controller-config.sh
new file mode 100644
index 0000000000000000000000000000000000000000..358f0b31c0d5239e6fd912256a3f52dbb4345ee8
--- /dev/null
+++ b/controller-config.sh
@@ -0,0 +1,103 @@
+#!/bin/bash
+#
+# Configures the first kubernetes controller. Notifies stack wait conditions
+# on completion.
+#   Parameters:
+#     $WORKER_KEY_PRIVATE:             ssh private key
+#     $WORKER_KEY_PUBLIC:              ssh public key
+#     $SUFFIX:                         unique string identifying the kubernetes
+#     $CONTROLPLANE_IP:                IP to reach the Kubernetes API from outside
+#     $APPLICATION_CREDENTIAL_ID:      OpenStack application credential ID
+#     $APPLICATION_CREDENTIAL_SECRET:  Application credential secret
+#     $SUBNET_ID:                      Kubernetes cluster subnet
+#     $FLOATING_NETWORK_ID:            ID of network providing FIPs
+#     $WC_NOTIFY:                      command to notify worker creation
+#     $CONTROLLER_WC_NOTIFY:           command to notify controller creation
+#     $KUBERNETES_VERSION:             Kubernetes version in use
+#
+echo "$WORKER_KEY_PRIVATE" > /root/.ssh/id_rsa
+echo "$WORKER_KEY_PUBLIC" > /root/.ssh/id_rsa.pub
+chmod 600 /root/.ssh/id_rsa
+
+cat > /tmp/kubeadm.yaml <<EOF
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+cgroupDriver: systemd
+---
+apiVersion: kubeadm.k8s.io/v1beta3
+kind: InitConfiguration
+nodeRegistration:
+  kubeletExtraArgs:
+    cloud-provider: "external"
+---
+apiVersion: kubeadm.k8s.io/v1beta3
+kind: ClusterConfiguration
+clusterName: "$SUFFIX"
+apiServer:
+  extraArgs:
+    cloud-provider: "external"
+  certSANs:
+    - "$CONTROLPLANE_IP"
+controlPlaneEndpoint: "$CONTROLPLANE_IP:6443"
+controllerManager:
+  extraArgs:
+    cloud-provider: "external"
+networking:
+  podSubnet: "10.244.0.0/16"
+EOF
+
+kubeadm init --upload-certs --config /tmp/kubeadm.yaml | tee /root/kubeinit.log
+# the control-plane join command spans three lines in the kubeadm output;
+# collapse it into a single line for the wait-condition payload
+control_plane_join_cmd=$(grep -B2 '\-\-control-plane' /root/kubeinit.log | tr -d '\n\t\\' | sed 's/^ *//')
+certificate_key=$(grep '\-\-certificate-key' /root/kubeinit.log | sed 's/.*certificate-key //')
+kubeadm init phase upload-certs --upload-certs --certificate-key "$certificate_key"
+cp /etc/kubernetes/admin.conf /root/externaladmin.conf
+echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
+export KUBECONFIG=/etc/kubernetes/admin.conf
+curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-vxlan.yaml \
+    | sed 's/CrossSubnet/Always/' \
+    | kubectl apply -f -
+
+cat > /tmp/cloud.conf <<EOF
+[Global]
+auth-url=https://hpccloud.mpcdf.mpg.de:13000/v3
+region=regionOne
+application-credential-id=$APPLICATION_CREDENTIAL_ID
+application-credential-secret=$APPLICATION_CREDENTIAL_SECRET
+[LoadBalancer]
+subnet-id=$SUBNET_ID
+floating-network-id=$FLOATING_NETWORK_ID
+create-monitor=true
+[BlockStorage]
+ignore-volume-az=true
+EOF
+
+cat > /tmp/cinder.yaml <<EOF
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: csi-sc-cinderplugin
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: cinder.csi.openstack.org
+parameters:
+  availability: nova
+EOF
+
+apt install -y git
+git clone \
+    --single-branch \
+    --no-tags \
+    --depth 1 \
+    -b 'release-$KUBERNETES_VERSION' \
+    https://github.com/kubernetes/cloud-provider-openstack.git \
+    /tmp/cloud-provider-openstack
+rm /tmp/cloud-provider-openstack/manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
+
+kubectl create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
+kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-roles.yaml
+kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
+kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
+kubectl apply -f /tmp/cloud-provider-openstack/manifests/cinder-csi-plugin
+kubectl apply -f /tmp/cinder.yaml
+$CONTROLLER_WC_NOTIFY --data-binary '{ "status": "SUCCESS", "data": "'"${control_plane_join_cmd}"'" }'
+$WC_NOTIFY --data-binary '{ "status": "SUCCESS", "data": "'"$(kubeadm token create --print-join-command --ttl 0)"'" }'
diff --git a/gateway.yaml b/gateway.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f9a5cbb514957e17ea37d9b2e2616bd7113dab48
--- /dev/null
+++ b/gateway.yaml
@@ -0,0 +1,202 @@
+heat_template_version: rocky
+
+description: An SSH gateway into your cluster
+
+
+# note that this should depend on the router interface for the net
+parameters:
+  net_id:
+    type: string
+    immutable: true
+    label: private network ID
+    description: ID of the network to create the instance on
+    constraints:
+      - custom_constraint: neutron.network
+  keypair:
+    type: string
+    immutable: true
+    label: ssh key
+    description: The name of the SSH key to add to gateway
+    constraints:
+      - custom_constraint: nova.keypair
+  external_network:
+    type: string
+    immutable: true
+    label: External network
+    description: ID of the network to allocate the floating IP on
+    default: cloud-public
+    constraints:
+      - allowed_values: [cloud-public, cloud-local-float]
+  prefix:
+    type: string
+    label: Resource name prefix
+    description: This string is prepended to all resource names
+    default: "ssh"
+  suffix:
+    type: string
+    label: Resource name suffix
+    description: This string is appended to all resource names
+    default: ""
+  cluster_prikey:
+    type: string
+    label: Private SSH key
+    description: Private SSH key granting access to cluster resources
+    default: ""
+  cluster_pubkey:
+    type: string
+    label: Public SSH key
+    description: Public SSH key corresponding to the private cluster key
+    default: ""
+  control-plane-0-ip:
+    type: string
+    label: Control plane node IP
+    description: IP to SCP kubectl config from
+    default: ""
+
+resources:
+  secgroup:
+    type: OS::Neutron::SecurityGroup
+    properties:
+      name:
+        list_join:
+          - "-"
+          - [{get_param: prefix}, "gateway-secgroup", {get_param: suffix}]
+      rules:
+        - protocol: tcp
+          remote_ip_prefix: 130.183.0.0/16
+          port_range_min: 22
+          port_range_max: 22
+        - protocol: tcp
+          remote_ip_prefix: 10.0.0.0/8
+          port_range_min: 22
+          port_range_max: 22
+  port:
+    type: OS::Neutron::Port
+    properties:
+      name:
+        list_join:
+          - "-"
+          - [{get_param: prefix}, "gateway", "port", {get_param: suffix}]
+      network: {get_param: net_id}
+      security_groups: [{get_resource: secgroup}]
+  floating-ip:
+    type: OS::Neutron::FloatingIP
+    properties:
+      floating_network: {get_param: external_network}
+      port_id: {get_resource: port}
+  user-data:
+    type: OS::Heat::CloudConfig
+    properties:
+      cloud_config:
+        package_update: true
+        package_upgrade: true
+        packages:
+          - helm
+          - kubectl
+        write_files:
+          - path: /root/.ssh/id_rsa
+            content: {get_param: cluster_prikey}
+            owner: root:root
+            permissions: 0o400
+          - path: /root/.ssh/id_rsa.pub
+            content: {get_param: cluster_pubkey}
+            owner: root:root
+            permissions: 0o444
+          - path: /etc/apt/keyrings/kubernetes-archive-keyring.gpg
+            content: |
+              xsBNBGKItdQBCADWmKTNZEYWgXy73FvKFY5fRro4tGNa4Be4TZW3wZpct9Cj8EjykU7S9EPoJ3Ed
+              KpxFltHRu7QbDi6LWSNA4XxwnudQrYGxnxx6Ru1KBHFxHhLfWsvFcGMwit/znpxtIt9UzqCm2YTE
+              W5NUnzQ4rXYqVQK2FLG4weYJ5bKwkY+ZsnRJpzxdHGJ0pBiqwkMT8bfQdJymUBown+SeuQ2HEqfj
+              VMsIRe0dweD2PHWeWo9fTXsz1Q5abiGckyOVyoN9//DgSvLUocUcZsrWvYPaN+o8lXTO3GYFGNVs
+              x069rxarkeCjOpiQOWrQmywXISQudcusSgmmgfsRZYW7FDBy5MQrABEBAAHNUVJhcHR1cmUgQXV0
+              b21hdGljIFNpZ25pbmcgS2V5IChjbG91ZC1yYXB0dXJlLXNpZ25pbmcta2V5LTIwMjItMDMtMDct
+              MDhfMDFfMDEucHViKcLAYgQTAQgAFgUCYoi11AkQtT3IDRPt7wUCGwMCGQEAAMGoCAB8QBNIIN3Q
+              2D3aahrfkb6axd55zOwR0tnriuJRoPHoNuorOpCv9aWMMvQACNWkxsvJxEF8OUbzhSYjAR534RDi
+              gjTetjK2i2wKLz/kJjZbuF4ZXMynCm40eVm1XZqU63U9XR2RxmXppyNpMqQO9LrzGEnNJuh23ica
+              ZY6no12axymxcle/+SCmda8oDAfa0iyA2iyg/eU05buZv54MC6RB13QtS+8vOrKDGr7RYp/VYvQz
+              YWm+ck6DvlaVX6VB51BkLl23SQknyZIJBVPm8ttU65EyrrgG1jLLHFXDUqJ/RpNKq+PCzWiyt4uy
+              3AfXK89RczLu3uxiD0CQI0T31u/IzsBNBGKItdQBCADIMMJdRcg0Phv7+CrZz3xRE8Fbz8AN+YCL
+              igQeH0B9lijxkjAFr+thB0IrOu7ruwNY+mvdP6dAewUur+pJaIjEe+4s8JBEFb4BxJfBBPuEbGSx
+              bi4OPEJuwT53TMJMEs7+gIxCCmwioTggTBp6JzDsT/cdBeyWCusCQwDWpqoYCoUWJLrUQ6dOlI7s
+              6p+iIUNIamtyBCwb4izs27HdEpX8gvO9rEdtcb7399HyO3oD4gHgcuFiuZTpvWHdn9WYwPGM6npJ
+              NG7crtLnctTR0cP9KutSPNzpySeAniHx8L9ebdD9tNPCWC+OtOcGRrcBeEznkYh1C4kzdP1ORm5u
+              pnknABEBAAHCwF8EGAEIABMFAmKItdQJELU9yA0T7e8FAhsMAABJmAgAhRPk/dFj71bU/UTXrkEk
+              ZZzE9JzUgan/ttyRrV6QbFZABByf4pYjBj+yLKw3280//JWurKox2uzEq1hdXPedRHICRuh1Fjd0
+              0otaQ+wGF3kY74zlWivB6Wp6tnL9STQ1oVYBUv7HhSHoJ5shELyedxxHxurUgFAD+pbFXIiK8cnA
+              HfXTJMcrmPpC+YWEC/DeqIyEcNPkzRhtRSuERXcq1n+KJvMUAKMD/tezwvujzBaaSWapmdnGmtRj
+              jL7IxUeGamVWOwLQbUr+34MwzdeJdcL8fav5LA8Uk0ulyeXdwiAK8FKQsixI+xZvz7HUs8ln4pZw
+              Gw/TpvO9cMkHogtgzQ==
+            encoding: b64
+            owner: root:root
+            permissions: 0o644
+          - path: /etc/apt/sources.list.d/kubernetes.list
+            content: >
+              deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg]
+              https://apt.kubernetes.io/ kubernetes-xenial main
+            owner: root:root
+            permissions: 0o644
+          - path: /usr/share/keyrings/helm.gpg
+            content: |
+              mQINBF6yP7IBEADWk4aijQ7Vhj7wn2oz+8asnfzsD0+257qjWy1m+cN4RP6T2NBGS2M5+vzbsKNm
+              GAja8jOpo46pHo/SCdc8Bwv+QHH+JbuBbDNEHwIBGV5p+ZRETiHql8UsyUAPCWinKR6evZrANCBE
+              zXtOEVJ4thuPoBuZkteKNTdPlOg9MBqD5zz+4iQX2CJJNW7+1sxAAVozHJxjJbu6c84yPvNFAiCA
+              ct+x5WJZFJWuO+l55vl6va8cV7twDgHomk+1Q7w00Z0gh28Pe1yfvvw3N+pCSYn88mSgZtdP3wz3
+              pABkMe4wMobNWuyXbIjGMuFDs7vGBY6UCL6alI/VC7rrSZqJZjntuoNI0Xlfc3BjUHWzinlbA7UF
+              k5LvqZO61V439Wm4x2n1V+4Kj/nPwtgBrNghaeDjxWLlgqaqynltSpXYnv2qGWYLRUb9WFymbYCJ
+              0piqRdNVNNI8Ht9nFaya6qjDcIxFwFMF9QcrECG1HCK1M5JjdJpzr6JqZ27/2ZG7DhabArSR5aoy
+              BqhCylJfXugneDhitmilJiQd5EzefjiDO29EuBSMwkAs+IKg9jxGyI47m3+ITWhMDWTFTYBF/O69
+              iKXfFvf4zrbfGMcf2w8vIOEBU3eTSYoYRhHXROedwcYpaVGJmsaT38QTSMqWTn12zlvmW5f6mEI5
+              LQq398gN9eiWDwARAQABtERIZWxtIGhvc3RlZCBieSBCYWx0byAoUmVwb3NpdG9yeSBzaWduaW5n
+              KSA8Z3Bnc2VjdXJpdHlAZ2V0YmFsdG8uY29tPokCVAQTAQoAPhYhBIG/gy4vGc0qoEcZWSlKxIJ8
+              GhaKBQJesj+yAhsvBQkSzAMABQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEClKxIJ8GhaKPHEP
+              /RRzvYCetoLeIj5FtedbeumGcWaJj97L4R1j7iK0dc0uvg0T5JeMDttAt69dFPHyB0kR1BLSwgJB
+              hYCtvwalvD/g7DmL5l5HIM7o/VrkXDay1PeewkCclA18y2wNM5EXKAuoFX5FMkRpTtSQhMMllbKs
+              NNSvwvEZWvqMQlwJ/2HgNoVl2NtfY65UXHvIV2nTTmCVDq4OYBlHoUX5rRE7fOgFZ+u6Su7yopTY
+              y13yY8ZVDNf/qNUWqA41gRYnwYtSq1DogHq1dcyr/SW/pFsn4n4LjG+38CIkSjFKOeusg2KPybZx
+              l/z0/l0Yv4pTaa91rh1hGWqhvYDbLr2XqvI1wpcsIRPpU8lasycyQ8EeI4B5FVelea2Z6rvGtMG9
+              2wVNCZ6YMYzpvRA9iRgve4J4ztlCwr0Tm78vY/vZfU5jkPW1VOXJ6nW/RJuc2mecuj8YpJtioNVP
+              bfxE/CjCCnGEnqn511ZYqKGd+BctqoFlWeSihHsttuSqJoqjOmt75MuN6zUJ0s3Ao+tzCmYkQzn2
+              LUwnYisioyTW4gMtlh/wsU6Rmimss5doyG2Mcc0QfstXLMthVkrBpbW4XT+Q6aTGUMlMv1BhKycD
+              UmewI2AMNth5HoodiEt18+X26+Q2exojaMHOCdkUJ+C44XPDy6EvG4RyO4bILHz5obD/9QZO/lzK
+            encoding: b64
+            owner: root:root
+            permissions: 0o644
+          - path: /etc/apt/sources.list.d/helm-stable-debian.list
+            content: >
+              deb [signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main
+            owner: root:root
+            permissions: 0o644
+        runcmd:
+          - [chmod, '0700', /root/.ssh]
+          - [update-locale, "LANG=en_US.UTF-8"]
+          - [locale-gen, --purge, "en_US.UTF-8"]
+          - [dpkg-reconfigure, --frontend, noninteractive, locales]
+          - [mkdir, "/root/.kube"]
+          - - scp
+            - -o
+            - "StrictHostKeyChecking=no"
+            - list_join: ["", [{get_param: control-plane-0-ip},":externaladmin.conf"]]
+            - "/root/.kube/config"
+          - [chmod, "0400", "/root/.kube/config"]
+          - [chmod, "0500", "/root/.kube"]
+
+  gateway:
+    type: OS::Nova::Server
+    properties:
+      name:
+        list_join:
+          - "-"
+          - [{get_param: prefix}, "ssh-gateway", {get_param: suffix}]
+      image: Debian 11
+      flavor: mpcdf.small
+      key_name: {get_param: keypair}
+      user_data: {get_resource: user-data}
+      user_data_format: SOFTWARE_CONFIG
+      user_data_update_policy: REPLACE
+      networks:
+        - port: {get_resource: port}
+
+outputs:
+  ip_address:
+    description: IP address of the SSH gateway
+    value: {get_attr: [floating-ip, floating_ip_address]}
diff --git a/mom-template.yaml b/mom-template.yaml
index 06e570e9abdddb46f7919721c5a8abd77c110920..9f7e516db1c2853116aa032963734ff665d21ef9 100644
--- a/mom-template.yaml
+++ b/mom-template.yaml
@@ -21,8 +21,9 @@ parameters:
-    description: Version of Kubernetes to install.  Changes affect new workers only.
+    description: Version of Kubernetes to install.
     type: string
     constraints:
-      - allowed_values: [ 1.21, 1.22, 1.23 ]
-    default: 1.23
+      - allowed_values: [ 1.25, 1.26 ]
+    default: 1.26
+    immutable: true
 
   application_credential_id:
     label: Application Credential ID
@@ -42,7 +43,7 @@ parameters:
     description: Network used by the controlplane and load balancers.
     type: string
     constraints:
-      - allowed_values: [ cloud-nonpublic, cloud-public ]
+      - allowed_values: [ cloud-local-float, cloud-public ]
 #      - custom_constraint: neutron.network (does not show external networks)
     default: cloud-public
     immutable: true
@@ -85,6 +86,12 @@ parameters:
     type: string
     default: 130.183.0.0/16
 
+  extra_controller_count:
+    type: number
+    label: Extra control plane nodes
+    description: Number of control plane nodes to create beyond the first
+    default: 2
+
 resources:
   k8s-net:
     type: OS::Neutron::Net
@@ -123,22 +130,46 @@ resources:
         - remote_mode: remote_group_id
         - protocol: icmp
         - protocol: tcp
-          remote_ip_prefix: 130.183.0.0/16
-          port_range_min: 22
-          port_range_max: 22
-        - protocol: tcp
-          remote_ip_prefix: 10.0.0.0/8
-          port_range_min: 22
-          port_range_max: 22
-        - protocol: tcp
-          remote_ip_prefix: { get_param: client_cidr }
-          port_range_min: 6443
-          port_range_max: 6443
-        - protocol: tcp
-          remote_ip_prefix: 192.168.0.0/24
-          port_range_min: 30000
-          port_range_max: 32767
-
+  k8s-loadbalancer:
+    type: OS::Octavia::LoadBalancer
+    depends_on: k8s-router-interface
+    properties:
+      name: { list_join: [ "-", [ "k8s-loadbalancer", { get_resource: k8s-suffix } ] ] }
+      vip_subnet: {get_resource: k8s-subnet}
+  k8s-pool:
+    type: OS::Octavia::Pool
+    properties:
+      name: { list_join: [ "-", [ "k8s-pool", { get_resource: k8s-suffix } ] ] }
+      lb_algorithm: ROUND_ROBIN
+      loadbalancer: {get_resource: k8s-loadbalancer}
+      protocol: TCP
+  k8s-listener:
+    type: OS::Octavia::Listener
+    properties:
+      name: { list_join: [ "-", [ "k8s-listener", { get_resource: k8s-suffix } ] ] }
+      allowed_cidrs:
+        - {get_param: client_cidr}
+        - {get_attr: [k8s-subnet, cidr]}
+        - 130.183.0.0/16
+        - 10.0.0.0/8
+      protocol: TCP
+      protocol_port: 6443
+      default_pool: {get_resource: k8s-pool}
+      loadbalancer: {get_resource: k8s-loadbalancer}
+  k8s-healthmonitor:
+    type: OS::Octavia::HealthMonitor
+    properties:
+      delay: 5
+      max_retries: 4
+      timeout: 10
+      type: TCP
+      pool: {get_resource: k8s-pool}
+  k8s-floating-ip:
+    type: OS::Neutron::FloatingIP
+    depends_on: k8s-loadbalancer
+    properties:
+      floating_network: {get_param: external_network}
+      port_id: {get_attr: [k8s-loadbalancer, vip_port_id]}
   k8s-controller-port:
     type: OS::Neutron::Port
     properties:
@@ -146,35 +177,26 @@ resources:
       network: { get_resource: k8s-net }
       security_groups:
         - { get_resource: k8s-secgroup }
-
-  k8s-controller-fip:
-    type: OS::Neutron::FloatingIP
-    depends_on: k8s-router-interface  # prevent a race condition -- port could get created before the router actually supports fips on this network
-    properties:
-      floating_network: { get_param: external_network }
-      value_specs: { "description": { list_join: [ "-", [ { get_param: OS::stack_name }, "controlplane" ] ] } }
-#      port_id: { get_resource: k8s-controller-port } (associate later due to a lower-level networking bug -- see following resource)
-
-  k8s-controller-fip-association:
-    type: OS::Neutron::FloatingIPAssociation
-    depends_on: k8s-join-wait  # wait until after the setup script has run
-    properties:
-      floatingip_id: { get_resource: k8s-controller-fip }
-      port_id: { get_resource: k8s-controller-port }
-
-  k8s-controller:
+  k8s-control-plane-0:
     type: OS::Nova::Server
     properties:
-      name: { list_join: [ "-", [ "k8s-controller", { get_resource: k8s-suffix } ] ] }
-      image: "Ubuntu 22.04"
+      name: { list_join: [ "-", [ "k8s-control-plane-0", { get_resource: k8s-suffix } ] ] }
+      image: Debian 11
       flavor: { get_param: controller_flavor }
-      key_name: { get_param: keypair }
+      key_name: { get_resource: k8s-worker-key }
       networks:
         - port: { get_resource: k8s-controller-port }
+      scheduler_hints:
+        group: { get_resource: k8s-control-plane }
       user_data_update_policy: IGNORE
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s-controller-config }
-
+  k8s-poolmember:
+    type: OS::Octavia::PoolMember
+    properties:
+      address: {get_attr: [k8s-control-plane-0, first_address]}
+      pool: {get_resource: k8s-pool}
+      protocol_port: 6443
   k8s-worker:
     type: OS::Heat::ResourceGroup
     update_policy:
@@ -186,9 +208,9 @@ resources:
         type: OS::Nova::Server
         properties:
           name: { list_join: [ "-", [ "k8s-worker-%index%", { get_resource: k8s-suffix } ] ] }
-          image: "Ubuntu 22.04"
+          image: Debian 11
           flavor: { get_param: worker_flavor }
-          key_name: { get_resource: k8s-worker-key }
+          key_name: {get_resource: k8s-worker-key}
           networks:
             - subnet: { get_resource: k8s-subnet }
           security_groups:
@@ -199,6 +221,36 @@ resources:
           user_data_format: SOFTWARE_CONFIG
           user_data: { get_resource: k8s-worker-config }
 
+  k8s-control-plane-1:
+    type: control-plane.yaml
+    properties:
+      index: 1
+      net_id: {get_resource: k8s-net}
+      keypair: {get_resource: k8s-worker-key}
+      secgroup: {get_resource: k8s-secgroup}
+      flavor: {get_param: controller_flavor}
+      join_cmd: {get_attr: [k8s-controller-join-wait, data]}
+      user_data: {get_resource: k8s-common-config}
+      pool: {get_resource: k8s-pool}
+      prefix: k8s
+      suffix: {get_resource: k8s-suffix}
+      scheduler-hint: {get_resource: k8s-control-plane}
+
+  k8s-control-plane-2:
+    type: control-plane.yaml
+    properties:
+      index: 2
+      net_id: {get_resource: k8s-net}
+      keypair: {get_resource: k8s-worker-key}
+      secgroup: {get_resource: k8s-secgroup}
+      flavor: {get_param: controller_flavor}
+      join_cmd: {get_attr: [k8s-controller-join-wait, data]}
+      user_data: {get_resource: k8s-common-config}
+      pool: {get_resource: k8s-pool}
+      prefix: k8s
+      suffix: {get_resource: k8s-suffix}
+      scheduler-hint: {get_resource: k8s-control-plane}
+
   k8s-worker-key:
     type: OS::Nova::KeyPair
     properties:
@@ -211,6 +263,12 @@ resources:
       name: { list_join: [ "-", [ "k8s-worker-group", { get_resource: k8s-suffix } ] ] }
       policies: [ "soft-anti-affinity" ]
 
+  k8s-control-plane:
+    type: OS::Nova::ServerGroup
+    properties:
+      name: { list_join: [ "-", [ "k8s-control-plane", { get_resource: k8s-suffix } ] ] }
+      policies: [ "soft-anti-affinity" ]
+
   k8s-common-config:
     type: OS::Heat::SoftwareConfig
     properties:
@@ -218,33 +276,8 @@ resources:
         str_replace:
           params:
             $KUBERNETES_VERSION: { get_param: kubernetes_version }
-          template: |
-            #!/bin/bash
-            apt-get purge -y snapd ufw ubuntu-advantage-tools
-            
-            curl https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
-            echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
-            apt-get update
-            apt-get install -y containerd.io docker-ce docker-ce-cli
-            apt-mark hold containerd.io docker-ce docker-ce-cli
-            
-            cat > /etc/docker/daemon.json <<EOF
-            {
-              "exec-opts": ["native.cgroupdriver=systemd"],
-              "log-driver": "json-file",
-              "log-opts": {
-                "max-size": "100m"
-              },
-              "storage-driver": "overlay2"
-            }
-            EOF
-            systemctl restart docker
-            
-            curl https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /etc/apt/keyrings/kubernetes.gpg
-            echo "deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
-            apt-get update
-            apt-get install -y kubelet='$KUBERNETES_VERSION.*' kubeadm='$KUBERNETES_VERSION.*' kubectl='$KUBERNETES_VERSION.*'
-            apt-mark hold kubelet kubeadm kubectl
+            $CONTAINERD_VERSION: 1.6
+          template: {get_file: common-config.sh}
 
   k8s-controller-config:
     type: OS::Heat::MultipartMime
@@ -257,91 +290,15 @@ resources:
                 $WORKER_KEY_PRIVATE: { get_attr: [ k8s-worker-key, private_key ] }
                 $WORKER_KEY_PUBLIC: { get_attr: [ k8s-worker-key, public_key ] }
                 $SUFFIX: { get_resource: k8s-suffix }
-                $CONTROLPLANE_IP: { get_attr: [ k8s-controller-fip, floating_ip_address ] }
+                $CONTROLPLANE_IP: { get_attr: [ k8s-floating-ip, floating_ip_address ] }
                 $APPLICATION_CREDENTIAL_ID: { get_param: application_credential_id }
                 $APPLICATION_CREDENTIAL_SECRET: { get_param: application_credential_secret }
                 $SUBNET_ID: { get_resource: k8s-subnet }
                 $FLOATING_NETWORK_ID: { get_attr: [ k8s-router, external_gateway_info, network_id ] }
                 $WC_NOTIFY: { get_attr: [ k8s-join-handle, curl_cli ] }
-              template: |
-                #!/bin/bash
-                echo "$WORKER_KEY_PRIVATE" > /root/.ssh/id_rsa
-                echo "$WORKER_KEY_PUBLIC" > /root/.ssh/id_rsa.pub
-                chmod 600 /root/.ssh/id_rsa
-                
-                cat > /tmp/kubeadm.yaml <<EOF
-                apiVersion: kubelet.config.k8s.io/v1beta1
-                kind: KubeletConfiguration
-                cgroupDriver: systemd
-                ---
-                apiVersion: kubeadm.k8s.io/v1beta2
-                kind: InitConfiguration
-                nodeRegistration:
-                  kubeletExtraArgs:
-                    cloud-provider: "external"
-                ---
-                apiVersion: kubeadm.k8s.io/v1beta2
-                kind: ClusterConfiguration
-                clusterName: "$SUFFIX"
-                apiServer:
-                  extraArgs:
-                    cloud-provider: "external"
-                  certSANs:
-                    - "$CONTROLPLANE_IP"
-                controllerManager:
-                  extraArgs:
-                    cloud-provider: "external"
-                networking:
-                  podSubnet: "10.244.0.0/16"
-                EOF
-                
-                kubeadm init --config /tmp/kubeadm.yaml
-                sed "s/server: https:.*/server: https:\/\/$CONTROLPLANE_IP:6443/" /etc/kubernetes/admin.conf > /root/externaladmin.conf
-                echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
-                export KUBECONFIG=/etc/kubernetes/admin.conf
-                curl -s https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-vxlan.yaml \
-                    | sed 's/CrossSubnet/Always/' \
-                    | kubectl apply -f -
-                
-                cat > /tmp/cloud.conf <<EOF
-                [Global]
-                auth-url=https://hpccloud.mpcdf.mpg.de:13000/v3
-                region=regionOne
-                application-credential-id=$APPLICATION_CREDENTIAL_ID
-                application-credential-secret=$APPLICATION_CREDENTIAL_SECRET
-                [LoadBalancer]
-                subnet-id=$SUBNET_ID
-                floating-network-id=$FLOATING_NETWORK_ID
-                create-monitor=true
-                [BlockStorage]
-                ignore-volume-az=true
-                EOF
-                
-                cat > /tmp/cinder.yaml <<EOF
-                apiVersion: storage.k8s.io/v1
-                kind: StorageClass
-                metadata:
-                  name: csi-sc-cinderplugin
-                  annotations:
-                    storageclass.kubernetes.io/is-default-class: "true"
-                provisioner: cinder.csi.openstack.org
-                parameters:
-                  availability: nova
-                EOF
-                
-                mkdir -p /tmp/cloud-provider-openstack
-                curl -sL 'https://github.com/kubernetes/cloud-provider-openstack/archive/refs/tags/v1.25.3.tar.gz' \
-                    | tar zxv --strip-components=1 -C /tmp/cloud-provider-openstack
-                rm /tmp/cloud-provider-openstack/manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
-                
-                kubectl create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
-                kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-roles.yaml
-                kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
-                kubectl apply -f /tmp/cloud-provider-openstack/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
-                kubectl apply -f /tmp/cloud-provider-openstack/manifests/cinder-csi-plugin
-                kubectl apply -f /tmp/cinder.yaml
-                
-                $WC_NOTIFY --data-binary '{ "status": "SUCCESS", "data": "'"$(kubeadm token create --print-join-command --ttl 0)"'" }'
+                $CONTROLLER_WC_NOTIFY: { get_attr: [ k8s-controller-join-handle, curl_cli ] }
+                $KUBERNETES_VERSION: { get_param: kubernetes_version }
+              template: {get_file: controller-config.sh}
 
   k8s-worker-config:
     type: OS::Heat::MultipartMime
@@ -356,7 +313,6 @@ resources:
                 #!/bin/bash
                 apt-get install -y jq
                 $(echo '$JOIN_COMMAND' | jq -r '."1"')
-
   k8s-join-handle:
     type: OS::Heat::WaitConditionHandle
 
@@ -366,6 +322,14 @@ resources:
       handle: { get_resource: k8s-join-handle }
       timeout: 600
 
+  k8s-controller-join-handle:
+    type: OS::Heat::WaitConditionHandle
+  k8s-controller-join-wait:
+    type: OS::Heat::WaitCondition
+    properties:
+      handle: {get_resource: k8s-controller-join-handle}
+      timeout: 600
+
   k8s-suffix:
     type: OS::Heat::RandomString
     properties:
@@ -373,12 +337,26 @@ resources:
       character_classes:
         - class: lowercase
         - class: digits
+  k8s-gateway:
+    type: gateway.yaml
+    depends_on: k8s-join-wait
+    properties:
+      net_id: {get_resource: k8s-net}
+      keypair: {get_param: keypair}
+      external_network: {get_param: external_network}
+      prefix: k8s
+      suffix: {get_resource: k8s-suffix}
+      cluster_prikey: {get_attr: [k8s-worker-key, private_key]}
+      cluster_pubkey: {get_attr: [k8s-worker-key, public_key]}
+      control-plane-0-ip: {get_attr: [k8s-control-plane-0, first_address]}
 
 outputs:
-  controlplane_ip:
-    description: Controlplane IP address
-    value: { get_attr: [ k8s-controller-fip, floating_ip_address ] }
-
+  control_plane_0_ip:
+    description: Private IP of control plane node holding kubectl config
+    value: { get_attr: [ k8s-control-plane-0, first_address ] }
+  gateway_ip:
+    description: SSH gateway IP address
+    value: {get_attr: [k8s-gateway, ip_address]}
   resource_suffix:
     description: Unique suffix identifying resources belonging to this stack
     value: { get_attr: [ k8s-suffix, value ] }
diff --git a/step-by-step/README.md b/step-by-step/README.md
index ce8963e3bcfeb086280c47377a5fca63def0142f..7c000e78d3e3d265b38ba90df891c7d649c043ac 100644
--- a/step-by-step/README.md
+++ b/step-by-step/README.md
@@ -1,75 +1,373 @@
-# Step-by-step installation of Kubernetes 1.23 on the MPCDF HPC Cloud
+# Step-by-step installation of Kubernetes 1.25+ on the MPCDF HPC Cloud
 
-The procedure below can be used to deploy a production-ready Kubernetes cluster on the MPCDF [HPC Cloud](https://docs.mpcdf.mpg.de/doc/computing/cloud/), including "out-of-the-box" support for persistent storage and load balancers.
-The resulting cluster is intended to be functionally equivalent to the [templatized version](../README.md).
+The procedure below can be used to deploy a production-ready Kubernetes cluster
+on the MPCDF [HPC Cloud](https://docs.mpcdf.mpg.de/doc/computing/cloud/),
+including "out-of-the-box" support for persistent storage and load balancers.
+The resulting cluster is intended to be functionally equivalent to the
+[templatized version](../README.md).
 
 *Values in `$ALL_CAPS` should be customized before running the command.*
 
-## Procedure
-
-*Steps 0-1 can also be accomplished via the [dashboard](https://hpccloud.mpcdf.mpg.de/).*
-
-0. Create a private network and security group
-    ```sh
-    openstack network create k8s-net --mtu 1500
-    openstack subnet create k8s-subnet --network k8s-net --subnet-range 192.168.0.0/24 --dns-nameserver 130.183.9.32 --dns-nameserver 130.183.1.21
-    openstack router create k8s-router
-    openstack router set k8s-router --external-gateway $EXTERNAL_NETWORK
-    openstack router add subnet k8s-router k8s-subnet
-
-    openstack security group create k8s-secgroup
-    openstack security group rule create k8s-secgroup --remote-group k8s-secgroup
-    openstack security group rule create k8s-secgroup --protocol icmp
-    openstack security group rule create k8s-secgroup --protocol tcp --dst-port 22          --remote-ip 130.183.0.0/16
-    openstack security group rule create k8s-secgroup --protocol tcp --dst-port 22          --remote-ip 10.0.0.0/8
-    openstack security group rule create k8s-secgroup --protocol tcp --dst-port 6443        --remote-ip $CLIENT_CIDR  # optional: source range for remote kubectl, etc.
-    openstack security group rule create k8s-secgroup --protocol tcp --dst-port 30000:32767 --remote-ip 192.168.0.0/24
-    ```
-    You will need to request access to an external network such as *cloud-public*, which is reachable from the internet.
-    Be sure to use the same value in later steps.
-
-1. Deploy some servers, e.g. with the following openstack commands
-    ```sh
-    openstack server create k8s-controller --image "Ubuntu 22.04" --flavor $CONTROLLER_FLAVOR --network k8s-net --security-group k8s-secgroup --key-name $KEYNAME --user-data setup.sh
-    openstack server create k8s-worker     --image "Ubuntu 22.04" --flavor $WORKER_FLAVOR     --network k8s-net --security-group k8s-secgroup --key-name $KEYNAME --user-data setup.sh --min $NUM_WORKERS --max $NUM_WORKERS
-
-    openstack floating ip create $EXTERNAL_NETWORK --description $HOSTNAME  # description optional: set custom hostname in DNS
-    openstack server add floating ip k8s-controller $FLOATING_IP
-    ```
-    Recommended flavors for the controller and workers are *mpcdf.medium* and *mpcdf.large*, respectively.
-    For production consider requesting a high-availability flavor for the controller.
-
-2. Create the cluster
-    ```sh
-    # on the controller:
-      edit kubeadm.yaml  # optional: fill-in controller floating ip for remote kubectl, etc.
-
-      kubeadm init --config kubeadm.yaml
-      echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
-      export KUBECONFIG=/etc/kubernetes/admin.conf
-      curl https://projectcalico.docs.tigera.io/manifests/calico-vxlan.yaml | sed 's/CrossSubnet/Always/' | kubectl apply -f -
-
-    # on the workers:
-      $JOIN_COMMAND_FROM_KUBEADM_INIT
-      ```
-    *Tip:* Login to the controller via its floating IP or FQDN, and then the workers via their private addresses (include `ForwardAgent=yes` in the initial SSH command).
-
-3. Add cloud provider [integration](https://github.com/kubernetes/cloud-provider-openstack)
-    ```sh
-    openstack application credential create k8s-manager -f value -c id -c secret
-    openstack subnet show k8s-subnet -f value -c id
-    openstack network show $EXTERNAL_NETWORK -f value -c id
-
-    # on the controller:
-      edit cloud.conf  # fill-in application-credential-id, application-credential-secret, subnet-id, and floating-network-id based on the commands above
-
-      git clone https://github.com/kubernetes/cloud-provider-openstack.git
-      rm cloud-provider-openstack/manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
-
-      kubectl create secret -n kube-system generic cloud-config --from-file=cloud.conf
-      kubectl apply -f cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-roles.yaml
-      kubectl apply -f cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
-      kubectl apply -f cloud-provider-openstack/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
-      kubectl apply -f cloud-provider-openstack/manifests/cinder-csi-plugin
-      kubectl apply -f cinder.yml
-    ```
+*Instead of using the OpenStack CLI you can also use the
+[dashboard](https://hpccloud.mpcdf.mpg.de/).*
+
+## Variables
+Set the following variables to your preferences:
+```bash
+EXTERNAL_NETWORK="cloud-public"
+CONTROL_PLANE_FLAVOR="mpcdf.medium.ha"
+KEYNAME="___"
+CLUSTER_NAME="________"
+WORKER_FLAVOR="mpcdf.large"
+NUM_WORKERS=3
+KUBERNETES_VERSION=1.26
+```
+This is meant to ease the use of the instructions below. You can, of course,
+enter values directly in place of the variables.
+
+The flavors above are the recommended choices among those made available by
+default. If you need other hardware configurations to back your cluster, please
+get in touch with us at the [helpdesk](https://helpdesk.mpcdf.mpg.de).
+
+## Deploying the network
+
+
+### Create a private network
+
+You will need to request access to an external network such as *cloud-public*,
+which is reachable from the internet. Be sure to use the same value in later
+steps.
+```bash
+openstack network create k8s-net --mtu 1500
+openstack subnet create k8s-subnet \
+    --network k8s-net \
+    --subnet-range 192.168.0.0/24 \
+    --dns-nameserver 130.183.9.32 \
+    --dns-nameserver 130.183.1.21
+openstack router create k8s-router --external-gateway "${EXTERNAL_NETWORK}"
+openstack router add subnet k8s-router k8s-subnet
+```
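+
+A quick check that the router is wired up to the external network (a sketch):
+```bash
+openstack router show k8s-router -c external_gateway_info -f value
+```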
+
+
+### Security groups (i.e. Firewall rules)
+
+Opening up traffic within the private network is safe, as external access is
+managed through the load balancer and (optionally) the SSH gateway.
+```bash
+openstack security group create k8s-secgroup
+openstack security group rule create k8s-secgroup \
+    --remote-ip 0.0.0.0/0 \
+    --protocol icmp \
+    --description "Allow all internal ICMP traffic"
+openstack security group rule create k8s-secgroup \
+    --remote-ip 0.0.0.0/0 \
+    --protocol tcp \
+    --description "Allow all internal TCP traffic"
+```
+
+
+### Create control-plane network ports
+
+These network ports will be attached to the load balancer and the control plane
+nodes. We select the IP addresses to make the rest of the steps easier to
+follow:
+```bash
+openstack port create k8s-control-plane \
+    --network k8s-net \
+    --fixed-ip subnet=k8s-subnet,ip-address=192.168.0.3 \
+    --security-group k8s-secgroup
+openstack port create k8s-control-plane-0 \
+    --network k8s-net \
+    --fixed-ip subnet=k8s-subnet,ip-address=192.168.0.4 \
+    --security-group k8s-secgroup
+openstack port create k8s-control-plane-1 \
+    --network k8s-net \
+    --fixed-ip subnet=k8s-subnet,ip-address=192.168.0.5 \
+    --security-group k8s-secgroup
+openstack port create k8s-control-plane-2 \
+    --network k8s-net \
+    --fixed-ip subnet=k8s-subnet,ip-address=192.168.0.6 \
+    --security-group k8s-secgroup
+```
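+
+The four ports and their fixed IP addresses should now be listed (a quick
+check):
+```bash
+openstack port list --network k8s-net
+```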
+
+
+### Load balancer for the control plane
+
+Start by creating a load balancer:
+```bash
+openstack loadbalancer create \
+    --name k8s-control-plane \
+    --vip-port-id k8s-control-plane
+```
+
+We have to wait a minute for the load balancer to be provisioned: behind the
+scenes a VM hosting an HAProxy instance is being created for you. In the
+meantime, we can assign a floating IP to the load balancer, which will make the
+Kubernetes API reachable from the outside:
+```bash
+openstack floating ip create "${EXTERNAL_NETWORK}" \
+    --description "${CLUSTER_NAME}" \
+    --port k8s-control-plane
+```
+Remember this IP address or the DNS entry for the control plane configuration
+later. *Note* that the description on the floating IP is optional. If you
+provide it you get a DNS entry for your floating IP as:
+`$CLUSTER_NAME.PROJECT_NAME.hpccloud.mpg.de`
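+
+One way to wait until provisioning has finished (a sketch polling the
+load balancer's `provisioning_status`):
+```bash
+while [ "$(openstack loadbalancer show k8s-control-plane \
+        -f value -c provisioning_status)" != "ACTIVE" ]; do
+    sleep 10
+done
+```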
+
+Once the load balancer is up we can continue:
+```bash
+openstack loadbalancer listener create k8s-control-plane \
+    --name k8s-control-plane-listener \
+    --protocol TCP \
+    --protocol-port 6443 \
+    --allowed-cidr 192.168.0.0/24 \
+    --allowed-cidr 130.183.0.0/16 \
+    --allowed-cidr 10.0.0.0/8
+openstack loadbalancer pool create \
+    --name k8s-control-plane-pool \
+    --lb-algorithm ROUND_ROBIN \
+    --listener k8s-control-plane-listener \
+    --protocol TCP
+openstack loadbalancer healthmonitor create k8s-control-plane-pool \
+    --name k8s-control-plane-healthmonitor \
+    --delay 5 \
+    --max-retries 4 \
+    --timeout 10 \
+    --type TCP
+openstack loadbalancer member create k8s-control-plane-pool \
+    --name k8s-control-plane-0 \
+    --address 192.168.0.4 \
+    --protocol-port 6443 
+openstack loadbalancer member create k8s-control-plane-pool \
+    --name k8s-control-plane-1 \
+    --address 192.168.0.5 \
+    --protocol-port 6443 
+openstack loadbalancer member create k8s-control-plane-pool \
+    --name k8s-control-plane-2 \
+    --address 192.168.0.6 \
+    --protocol-port 6443 
+```
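+
+You can inspect the pool at any point; the members will show an `ERROR`
+operating status until the control plane nodes exist and answer on port 6443:
+```bash
+openstack loadbalancer member list k8s-control-plane-pool
+```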
+
+
+### Get SSH access to your cluster
+
+#### Using the loadbalancer
+This sets up the load balancer to distribute SSH traffic to the control plane
+nodes. Once you are on a control plane node you can reach the rest of the
+Kubernetes cluster.
+```bash
+openstack loadbalancer listener create k8s-control-plane \
+    --name ssh-control-plane-listener \
+    --protocol TCP \
+    --protocol-port 22 \
+    --timeout-client-data 0 \
+    --timeout-member-data 0 
+openstack loadbalancer pool create \
+    --name ssh-control-plane-pool \
+    --lb-algorithm ROUND_ROBIN \
+    --listener ssh-control-plane-listener \
+    --protocol TCP \
+    --session-persistence type=SOURCE_IP
+openstack loadbalancer healthmonitor create ssh-control-plane-pool \
+    --name ssh-control-plane-healthmonitor \
+    --delay 5 \
+    --max-retries 4 \
+    --timeout 10 \
+    --type TCP
+openstack loadbalancer member create ssh-control-plane-pool \
+    --name ssh-control-plane-0 \
+    --address 192.168.0.4 \
+    --protocol-port 22
+openstack loadbalancer member create ssh-control-plane-pool \
+    --name ssh-control-plane-1 \
+    --address 192.168.0.5 \
+    --protocol-port 22
+openstack loadbalancer member create ssh-control-plane-pool \
+    --name ssh-control-plane-2 \
+    --address 192.168.0.6 \
+    --protocol-port 22
+```
+
+
+## Bootstrapping kubernetes
+
+
+### The first control plane node
+
+
+#### Instantiation
+
+Edit the script `../common-config.sh`. You will need to set the two variables
+`CONTAINERD_VERSION` and `KUBERNETES_VERSION`, which Heat provides when using
+the template method. Then use the script as user data for the control plane
+and worker nodes you create:
+```bash
+openstack server create k8s-control-plane-0 \
+    --image "Debian 11" \
+    --flavor $CONTROL_PLANE_FLAVOR \
+    --port k8s-control-plane-0 \
+    --security-group k8s-secgroup \
+    --key-name $KEYNAME \
+    --user-data ../common-config.sh
+```
+
+Wait a moment for the node to come up and run the base configuration script.
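+
+You can follow the boot and cloud-init progress via the console log:
+```bash
+openstack console log show k8s-control-plane-0 --lines 20
+```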
+
+
+#### Configuration
+
+Use the provided `kubeadm.yaml` and fill in your *cluster name* and the
+*floating IP or DNS name of the load balancer*. Copy the resulting
+configuration to the control plane using scp.
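+
+For example (assuming `CONTROL_PLANE_IP` holds the floating IP or DNS name of
+the load balancer):
+```bash
+scp kubeadm.yaml root@${CONTROL_PLANE_IP}:
+```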
+
+Then ssh into the control plane and initialize kubernetes. *Note* that you
+will land on a node based on your source IP; if you want to configure a
+different control plane node first, scp/ssh over to that one.
+```bash
+ssh ${CONTROL_PLANE_IP} -l root
+kubeadm init --upload-certs --config kubeadm.yaml
+```
+
+You will be given a command to add control plane nodes to the cluster:
+
+```bash
+kubeadm join $CLUSTER_NAME:6443 --token ___ \
+	--discovery-token-ca-cert-hash ___ \
+	--control-plane --certificate-key ___
+```
+
+And another to add workers to the cluster:
+```bash
+kubeadm join $CLUSTER_NAME:6443 --token ___ \
+	--discovery-token-ca-cert-hash ___
+```
+
+You can now also grab the kubectl admin configuration file from
+`/etc/kubernetes/admin.conf`. Set `kubectl` up to use it; more information is
+in the [kubernetes docs](https://kubernetes.io/docs/tasks/tools/).
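+
+For example (one possibility; this overwrites any existing local kubectl
+configuration):
+```bash
+mkdir -p ~/.kube
+scp root@${CONTROL_PLANE_IP}:/etc/kubernetes/admin.conf ~/.kube/config
+kubectl get nodes
+```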
+
+
+### The rest of the control plane
+Make a copy of `../common-config.sh`, calling it, for example,
+`control-plane-config.sh`. Add the control plane join command printed by
+`kubeadm init` at the bottom, as sketched below.
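+
+For example (a sketch; paste the exact join command you were given in place of
+the placeholders):
+```bash
+cp ../common-config.sh control-plane-config.sh
+cat >> control-plane-config.sh <<'EOF'
+kubeadm join $CLUSTER_NAME:6443 --token ___ \
+    --discovery-token-ca-cert-hash ___ \
+    --control-plane --certificate-key ___
+EOF
+```
+If the token has expired in the meantime (tokens are valid for 24 hours by
+default), generate a fresh one on the first node with
+`kubeadm token create --print-join-command`; a new certificate key comes from
+`kubeadm init phase upload-certs --upload-certs`.
+
+Use this new file as the user data for the new control plane nodes: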
+
+```bash
+openstack server create k8s-control-plane-1 \
+    --image "Debian 11" \
+    --flavor $CONTROL_PLANE_FLAVOR \
+    --port k8s-control-plane-1 \
+    --security-group k8s-secgroup \
+    --key-name $KEYNAME \
+    --user-data control-plane-config.sh
+openstack server create k8s-control-plane-2 \
+    --image "Debian 11" \
+    --flavor $CONTROL_PLANE_FLAVOR \
+    --port k8s-control-plane-2 \
+    --security-group k8s-secgroup \
+    --key-name $KEYNAME \
+    --user-data control-plane-config.sh
+```
+
+It will take a few minutes for the machines to become available. You will see
+them come up with:
+
+```bash
+kubectl get nodes
+```
+
+The output should look something like this:
+```
+NAME                  STATUS   ROLES           AGE     VERSION
+k8s-control-plane-0   Ready    control-plane   14m     v1.26.6
+k8s-control-plane-1   Ready    control-plane   4m9s    v1.26.6
+k8s-control-plane-2   Ready    control-plane   3m17s   v1.26.6
+```
+
+### Adding worker nodes
+Now it is time to add some worker nodes. Make a second copy of
+`../common-config.sh`, calling it, for example, `worker-config.sh`. Add the
+command to join worker nodes to the cluster at the bottom. Use this new file
+as the user data for the worker nodes:
+
+```bash
+openstack server create k8s-worker \
+    --image "Debian 11" \
+    --flavor $WORKER_FLAVOR \
+    --network k8s-net \
+    --security-group k8s-secgroup \
+    --key-name $KEYNAME \
+    --user-data worker-config.sh \
+    --min $NUM_WORKERS \
+    --max $NUM_WORKERS
+```
+
+Once again it will take a few minutes for the machines to come up and appear in
+the cluster. You will be able to see them with:
+```bash
+kubectl get nodes
+```
+
+The output should look something like this:
+```
+NAME                  STATUS   ROLES           AGE   VERSION
+k8s-control-plane-0   Ready    control-plane   32m   v1.26.6
+k8s-control-plane-1   Ready    control-plane   21m   v1.26.6
+k8s-control-plane-2   Ready    control-plane   20m   v1.26.6
+k8s-worker-1          Ready    <none>          22s   v1.26.6
+k8s-worker-2          Ready    <none>          23s   v1.26.6
+k8s-worker-3          Ready    <none>          19s   v1.26.6
+```
+
+## Configuring kubernetes
+### Container networking
+We have tested Calico with an overlay network (VXLAN); if you have high
+networking demands you may want to look into a more performant solution. The
+`sed` below switches the VXLAN mode from `CrossSubnet` to `Always`, so that pod
+traffic between nodes on the same subnet is also encapsulated rather than being
+dropped by OpenStack's port security:
+```bash
+curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-vxlan.yaml \
+    | sed 's/CrossSubnet/Always/' \
+    | kubectl apply -f -
+```
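+
+To verify (assuming the daemonset keeps its upstream name `calico-node`):
+```bash
+kubectl -n kube-system rollout status daemonset/calico-node
+```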
+
+### Plug Kubernetes into OpenStack
+You can use the OpenStack cloud provider
+[integration](https://github.com/kubernetes/cloud-provider-openstack) to
+provision persistent storage and load balancers for external access to your
+kubernetes cluster. You will need to create an OpenStack application credential
+for kubernetes:
+```bash
+openstack application credential create k8s-cloud-provider
+```
+You will also need the subnet your workers reside on:
+```bash
+openstack subnet show k8s-subnet -f value -c id
+```
+and the identity of the external network you are using:
+```bash
+openstack network show $EXTERNAL_NETWORK -f value -c id
+```
+
+Fill the application credential and network IDs into the provided `cloud.conf`,
+then make it available on your kubernetes cluster as a secret:
+```bash
+kubectl create secret -n kube-system generic cloud-config --from-file=cloud.conf
+```
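+
+For reference, the fields to fill in look roughly like this (a sketch; the key
+names follow the cloud-provider-openstack documentation, and the auth URL is a
+placeholder):
+```ini
+[Global]
+auth-url = https://<keystone-endpoint>/v3
+application-credential-id = <id of the credential created above>
+application-credential-secret = <its secret>
+
+[LoadBalancer]
+subnet-id = <k8s-subnet id>
+floating-network-id = <external network id>
+```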
+
+We use the release of the
+[cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack/)
+corresponding to the kubernetes version:
+```bash
+git clone \
+    --single-branch \
+    --no-tags \
+    --depth 1 \
+    -b "release-$KUBERNETES_VERSION" \
+    https://github.com/kubernetes/cloud-provider-openstack.git
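+# drop the example secret manifest; it is superseded by the cloud-config secret created above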
+rm cloud-provider-openstack/manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
+
+kubectl apply -f cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-roles.yaml
+kubectl apply -f cloud-provider-openstack/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
+kubectl apply -f cloud-provider-openstack/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
+kubectl apply -f cloud-provider-openstack/manifests/cinder-csi-plugin
+kubectl apply -f cinder.yaml
+```
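+
+To check that the controllers came up (pod names as defined in the upstream
+manifests):
+```bash
+kubectl -n kube-system get pods | grep -E 'openstack-cloud-controller|csi-cinder'
+```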
diff --git a/step-by-step/kubeadm.yaml b/step-by-step/kubeadm.yaml
index 9ba5babf4d07c363b4d76bf8937f288de3ef8085..593fa7f75a1071a689b024890748fcb0ebf41458 100644
--- a/step-by-step/kubeadm.yaml
+++ b/step-by-step/kubeadm.yaml
@@ -2,19 +2,21 @@ apiVersion: kubelet.config.k8s.io/v1beta1
 kind: KubeletConfiguration
 cgroupDriver: systemd
 ---
-apiVersion: kubeadm.k8s.io/v1beta2
+apiVersion: kubeadm.k8s.io/v1beta3
 kind: InitConfiguration
 nodeRegistration:
   kubeletExtraArgs:
     cloud-provider: "external"
 ---
-apiVersion: kubeadm.k8s.io/v1beta2
+apiVersion: kubeadm.k8s.io/v1beta3
 kind: ClusterConfiguration
+clusterName: "$CLUSTER_NAME"
 apiServer:
   extraArgs:
     cloud-provider: "external"
-#  certSANs:
-#    - "(controller floating ip)"
+  certSANs:
+    - "$CONTROL_PLANE_IP"
+controlPlaneEndpoint: "$CONTROL_PLANE_IP:6443"
 controllerManager:
   extraArgs:
     cloud-provider: "external"
diff --git a/step-by-step/setup.sh b/step-by-step/setup.sh
deleted file mode 100644
index a490afd6199be13f7bdeeab12b7a1c9982126084..0000000000000000000000000000000000000000
--- a/step-by-step/setup.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-
-apt-get purge -y snapd ufw ubuntu-advantage-tools
-
-curl https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
-echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
-apt-get update
-apt-get install -y containerd.io docker-ce docker-ce-cli
-apt-mark hold containerd.io docker-ce docker-ce-cli
-
-cat > /etc/docker/daemon.json <<EOF
-{
-  "exec-opts": ["native.cgroupdriver=systemd"],
-  "log-driver": "json-file",
-  "log-opts": {
-    "max-size": "100m"
-  },
-  "storage-driver": "overlay2"
-}
-EOF
-systemctl restart docker
-
-curl https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /etc/apt/keyrings/kubernetes.gpg
-echo "deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
-apt-get update
-apt-get install -y kubelet='1.23.*' kubeadm='1.23.*' kubectl='1.23.*'
-apt-mark hold kubelet kubeadm kubectl