Here is a diagram representing how MetalK8s orchestrates deployment on a set of machines (diagram not reproduced here):
The intent is for this installer to deploy a system which looks exactly like one deployed using
kubeadm, i.e. using the same (or at least highly similar) static manifests, cluster
ConfigMaps, RBAC roles and bindings, …
The rationale: at some point in time, once kubeadm gets easier to embed in larger deployment mechanisms, we want to be able to switch over without too much hassle. Also, kubeadm applies best practices, so why not follow them anyway?
To launch the bootstrap process, some input from the end user is required, and it can vary from one installation to another:
- CIDRs (i.e. x.y.z.w/n) of the control plane networks to use

  Given these CIDRs, we can find the address on which to bind services like etcd and kube-apiserver. These should be existing networks in the infrastructure to which all hosts are connected.

  This is a list of CIDRs, which will be tried one after another to find a matching local interface (i.e. hosts comprising the cluster may reside in different subnets, e.g. control plane in VMs, workload plane on physical infrastructure); see the interface-matching sketch after this list.
- CIDRs (i.e. x.y.z.w/n) of the workload plane networks to use

  Given these CIDRs, we can find the address to be used by the CNI overlay network (i.e. Calico) for inter-node routing. This can be the same as the control plane network.
- CIDR (i.e. x.y.z.w/n) of the Pod overlay network

  Used to configure the Calico IPPool (see the example manifest after this list). This must be a non-existing network in the infrastructure.
- CIDR (i.e. x.y.z.w/n) of the Service network
- VIP for the kube-apiserver

  Used as the address of kube-apiserver where required. This can either be a VIP managed by custom load-balancing/high-availability infrastructure, in which case the keepalived toggle must be off, or one which our platform will manage using keepalived. If keepalived is enabled, this VIP must sit in a control plane CIDR shared by all control plane nodes.
  Note: we run keepalived in unicast mode, an extension of classic VRRP that removes the need for multicast support on the network; a configuration sketch follows this list.
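To make the interface lookup described in the first bullet concrete, here is a minimal sketch in Python (standard library only) of trying a list of CIDRs one after another against the addresses assigned to the local host. The function name, CIDRs, and addresses are made up for the example; this illustrates the matching logic, not the actual MetalK8s implementation.

```python
import ipaddress

def find_bind_address(cidrs, local_addresses):
    """Return the first local address contained in one of the candidate
    CIDRs, trying the CIDRs in the order they were given."""
    for cidr in cidrs:
        network = ipaddress.ip_network(cidr)
        for address in local_addresses:
            if ipaddress.ip_address(address) in network:
                return address
    raise LookupError("no local interface matches any of the CIDRs")

# Hypothetical input: two candidate control plane subnets, plus the
# addresses configured on this host's interfaces.
print(find_bind_address(
    ["10.100.0.0/16", "192.168.10.0/24"],
    ["127.0.0.1", "192.168.10.7"],
))  # -> 192.168.10.7
```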
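For the Pod overlay network, a Calico IPPool object along the lines described above could look like the following manifest. The pool name, CIDR, and encapsulation settings are placeholders for illustration, not necessarily what MetalK8s renders:

```yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv4-ippool   # placeholder name
spec:
  cidr: 10.233.0.0/16         # hypothetical Pod overlay CIDR; must not exist in the infrastructure
  ipipMode: Always            # encapsulate Pod traffic between nodes
  natOutgoing: true           # SNAT traffic leaving the Pod network
```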
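For the managed-VIP case, keepalived in unicast mode is configured with an explicit list of peers instead of relying on multicast. A rough sketch, in which every value (interface name, router id, priority, addresses) is a placeholder:

```
vrrp_instance control_plane_vip {
    state BACKUP
    interface eth0                 # interface matching a control plane CIDR
    virtual_router_id 51           # must be identical on all peers
    priority 100
    advert_int 1
    unicast_src_ip 10.100.0.11     # this node's control plane address
    unicast_peer {                 # the other control plane nodes
        10.100.0.12
        10.100.0.13
    }
    virtual_ipaddress {
        10.100.0.100/16            # the kube-apiserver VIP
    }
}
```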
We assume a host-based firewall is used, based on
firewalld. As such, for
any service we deploy which must be accessible from the outside, we must set up
an appropriate rule.
We assume SSH access is not blocked by the host-based firewall.
These services include (a sketch of matching firewalld rules follows the list):

- VRRP, if keepalived is enabled, on control plane nodes
- HTTPS on the bootstrap node, for nginx fronting the OCI registry and serving the yum repository
- salt-master on the bootstrap node
- etcd on control plane / etcd nodes
- kube-apiserver on control plane nodes
- kubelet on all cluster nodes
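As an indication of what such rules could look like with firewall-cmd, the sketch below opens the upstream default ports for each service; the rules MetalK8s actually installs may differ:

```shell
# Control plane nodes: kube-apiserver, etcd client/peer traffic
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp

# All cluster nodes: kubelet
firewall-cmd --permanent --add-port=10250/tcp

# Bootstrap node: salt-master and nginx (HTTPS)
firewall-cmd --permanent --add-port=4505-4506/tcp
firewall-cmd --permanent --add-service=https

# Control plane nodes, only when keepalived is enabled: VRRP
firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'

firewall-cmd --reload
```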