Cluster and Services Configurations¶
This section describes the available Cluster and Services Configurations, including procedures for customizing and applying any of them.
Default Service Configurations¶
MetalK8s addons (Alertmanager, Dex, Grafana, Prometheus and UI) ship with default runtime service configurations required for basic service deployment. Find below an exhaustive list of the default Service Configurations deployed in a MetalK8s cluster.
Alertmanager Default Configuration¶
Alertmanager handles alerts sent by Prometheus. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.
The default configuration values for Alertmanager are specified below:
# Configuration of the Alertmanager service
apiVersion: addons.metalk8s.scality.com
kind: AlertmanagerConfig
spec:
  # Configure the Alertmanager Deployment
  deployment:
    replicas: 1
  notification:
    config:
      global:
        resolve_timeout: 5m
      inhibit_rules:
        - source_matchers:
            - 'severity = critical'
          target_matchers:
            - 'severity =~ warning|info'
          equal:
            - 'namespace'
            - 'alertname'
        - source_matchers:
            - 'severity = warning'
          target_matchers:
            - 'severity = info'
          equal:
            - 'namespace'
            - 'alertname'
        - source_matchers:
            - 'alertname = InfoInhibitor'
          target_matchers:
            - 'severity = info'
          equal:
            - 'namespace'
      route:
        group_by: ['namespace']
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: 'null'
        routes:
          - receiver: 'metalk8s-alert-logger'
            continue: True
          - receiver: 'null'
            matchers:
              - alertname =~ "InfoInhibitor|Watchdog"
      receivers:
        - name: 'null'
        - name: 'metalk8s-alert-logger'
          webhook_configs:
            - send_resolved: True
              url: 'http://metalk8s-alert-logger:19094/'
      templates:
        - '/etc/alertmanager/config/*.tmpl'
See Alertmanager Configuration Customization to override these defaults.
Dex Default Configuration¶
Dex is an Identity Provider that drives user authentication and identity management in a MetalK8s cluster.
The default configuration values for Dex are specified below:
# Defaults for configuration of Dex (OIDC)
apiVersion: addons.metalk8s.scality.com/v1alpha2
kind: DexConfig
spec:
  # Deployment configuration
  deployment:
    replicas: 2
    affinity:
      podAntiAffinity:
        soft:
          - topologyKey: kubernetes.io/hostname
        # - topologyKey: my.second.important/topologyKey
        #   weight: 42
        # hard:
        #   - topologyKey: kubernetes.io/hostname
  # Dex server configuration
  config:
    issuer: {{ control_plane_ingress_ep }}/oidc
    storage:
      config:
        inCluster: true
      type: kubernetes
    logger:
      level: debug
    https: 0.0.0.0:5554
    tlsCert: /etc/dex/tls/https/server/tls.crt
    tlsKey: /etc/dex/tls/https/server/tls.key
    frontend:
      dir: /srv/dex/web/
      theme: scality
      issuer: MetalK8s
    connectors: []
    oauth2:
      alwaysShowLoginScreen: true
      skipApprovalScreen: true
      responseTypes: ["code", "token", "id_token"]
    expiry:
      signingKeys: "6h"
      idTokens: "24h"
    {#- FIXME: client secrets shouldn't be hardcoded #}
    {#- TODO: allow overriding these predefined clients #}
    staticClients:
      - id: oidc-auth-client
        name: oidc-auth-client
        redirectURIs:
          - urn:ietf:wg:oauth:2.0:oob
        secret: lkfa9jaf3kfakqyeoikfjakf93k2l
        trustedPeers:
          - metalk8s-ui
          - grafana-ui
      - id: metalk8s-ui
        name: MetalK8s UI
        redirectURIs:
          - {{ control_plane_ingress_ep }}/{{ metalk8s_ui_config.spec.basePath.lstrip('/') }}
        secret: ybrMJpVMQxsiZw26MhJzCjA2ut
      - id: grafana-ui
        name: Grafana UI
        redirectURIs:
          - {{ control_plane_ingress_ep }}/grafana/login/generic_oauth
        secret: 4lqK98NcsWG5qBRHJUqYM1
    enablePasswordDB: true
    staticPasswords: []
See Dex Configuration Customization to override these defaults.
Grafana Default Configuration¶
Grafana is a web interface used to visualize and analyze metrics scraped by Prometheus, with nice graphs.
The default configuration values for Grafana are specified below:
# Configuration of the Grafana service
apiVersion: addons.metalk8s.scality.com
kind: GrafanaConfig
spec:
  # Configure the Grafana Deployment
  deployment:
    replicas: 1
  config:
    grafana.ini:
      analytics:
        check_for_updates: false
        reporting_enabled: false
      paths:
        data: /var/lib/grafana/
        logs: /var/log/grafana
        plugins: /var/lib/grafana/plugins
        provisioning: /etc/grafana/provisioning
      log:
        mode: console
      server:
        root_url: "{{ control_plane_ingress_endpoint }}/grafana"
      auth:
        disable_login_form: true
        oauth_auto_login: true
      auth.generic_oauth:
        api_url: "{{ control_plane_ingress_endpoint }}/oidc/userinfo"
        auth_url: "{{ control_plane_ingress_endpoint }}/oidc/auth"
        client_id: grafana-ui
        client_secret: 4lqK98NcsWG5qBRHJUqYM1
        enabled: true
        role_attribute_path: contains(`{{ dex.spec.config.staticPasswords | map(attribute='email') | list | tojson }}`, email) && 'Admin'
        scopes: openid profile email groups
        tls_skip_verify_insecure: true
        token_url: "{{ control_plane_ingress_endpoint }}/oidc/token"
Prometheus Default Configuration¶
Prometheus is responsible for monitoring all the applications and systems in the MetalK8s cluster. It scrapes and stores various metrics from these systems, then analyzes them against a set of alerting rules. If a rule matches, Prometheus sends an alert to Alertmanager.
The default configuration values for Prometheus are specified below:
# Configuration of the Prometheus service
apiVersion: addons.metalk8s.scality.com
kind: PrometheusConfig
spec:
  # Configure the Prometheus Deployment
  deployment:
    replicas: 1
  config:
    retention_time: "10d"
    retention_size: "0"  # "0" to disable size-based retention
    enable_admin_api: false
    serviceMonitor:
      kubelet:
        scrapeTimeout: 10s
  rules:
    kube_apps:
      kube_job_not_completed:
        warning:
          hours: 24  # Hours of job active before we trigger alert
    node_exporter:
      node_filesystem_space_filling_up:
        warning:
          hours: 24      # Hours before there is no space left
          threshold: 40  # Min space left to trigger prediction
        critical:
          hours: 4
          threshold: 20
      node_filesystem_almost_out_of_space:
        warning:
          available: 20  # Percentage of free space left
        critical:
          available: 12
      node_filesystem_files_filling_up:
        warning:
          hours: 24      # Hours before there is no inode left
          threshold: 40  # Min space left to trigger prediction
        critical:
          hours: 4
          threshold: 20
      node_filesystem_almost_out_of_files:
        warning:
          available: 15  # Percentage of free inodes left
        critical:
          available: 8
      node_network_receive_errors:
        warning:
          error_rate: 0.01  # Rate of receive errors for the last 2m
      node_network_transmit_errors:
        warning:
          error_rate: 0.01  # Rate of transmit errors for the last 2m
      node_high_number_conntrack_entries_used:
        warning:
          threshold: 0.75
      node_clock_skew_detected:
        warning:
          threshold:
            high: 0.05
            low: -0.05
      node_clock_not_synchronising:
        warning:
          threshold: 0
      node_raid_degraded:
        critical:
          threshold: 1
      node_raid_disk_failure:
        warning:
          threshold: 1
Loki Default Configuration¶
Loki is a log aggregation system: its job is to receive logs from collectors (fluent-bit), store them on persistent storage, and make them queryable through its API.
The default configuration values for Loki are specified below:
# Configuration of the Loki service
apiVersion: addons.metalk8s.scality.com
kind: LokiConfig
spec:
  deployment:
    replicas: 1
    resources:
      requests:
        memory: "256Mi"
  config:
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    memberlist:
      abort_if_cluster_join_fails: false
      join_members:
        - loki-memberlist
      dead_node_reclaim_time: 30s
      gossip_to_dead_nodes_time: 15s
      left_ingesters_timeout: 30s
      bind_addr: ["0.0.0.0"]
      bind_port: 7946
    ingester:
      chunk_block_size: 262144
      chunk_idle_period: 3m
      chunk_retain_period: 1m
      wal:
        dir: /var/loki/loki/wal
      lifecycler:
        ring:
          kvstore:
            store: memberlist
      max_transfer_retries: 0
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
        - from: 2018-04-15
          index:
            period: 168h
            prefix: index_
          object_store: filesystem
          schema: v9
          store: boltdb
    server:
      http_listen_port: 3100
    storage_config:
      boltdb:
        directory: /var/loki/loki/index
      filesystem:
        directory: /var/loki/loki/chunks
    table_manager:
      retention_deletes_enabled: true
      retention_period: 336h
Fluent-bit Default Configuration¶
Fluent-bit is a log collector: its job is to retrieve local logs and forward them to a log aggregation system (Loki).
The default configuration values for Fluent-bit are specified below:
# Configuration of the fluent-bit service
apiVersion: addons.metalk8s.scality.com
kind: FluentBitConfig
spec:
  deployment:
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        memory: 1Gi
  config:
    output:
      - Name: loki
        Match: kube.*
        Host: loki
        Port: 3100
        Tenant_ID: '""'
        Labels: job=fluent-bit
        Label_Keys: "$container, $node, $namespace, $pod, $app, $release, $stream"
        Auto_Kubernetes_Labels: false
        Line_Format: json
        Log_Level: warn
        Workers: 4
      - Name: loki
        Match: host.*
        Host: loki
        Port: 3100
        Tenant_ID: '""'
        Labels: "job=fluent-bit"
        Label_Keys: "$hostname, $unit"
        Line_Format: json
        Log_Level: warn
        Workers: 4
UI Default Configuration¶
MetalK8s UI simplifies management and monitoring of a MetalK8s cluster from a centralized user interface.
The default configuration values for MetalK8s UI are specified below:
# Defaults for configuration of MetalK8s UI
apiVersion: addons.metalk8s.scality.com/v1alpha2
kind: UIConfig
spec:
  # Deployment configuration
  deployment:
    replicas: 2
    affinity:
      podAntiAffinity:
        soft:
          - topologyKey: kubernetes.io/hostname
        # - topologyKey: my.second.important/topologyKey
        #   weight: 42
        # hard:
        #   - topologyKey: kubernetes.io/hostname
  # Authentication configuration
  auth:
    kind: "OIDC"
    providerUrl: "/oidc"
    redirectUrl: "{{ salt.metalk8s_network.get_control_plane_ingress_endpoint() }}/"
    clientId: "metalk8s-ui"
    responseType: "id_token"
    scopes: "openid profile email groups offline_access audience:server:client_id:oidc-auth-client"
  # UI configuration
  title: Metalk8s Platform
  basePath: /
See Metalk8s UI Configuration Customization to override these defaults.
Shell UI Default Configuration¶
MetalK8s Shell UI provides a common set of features to MetalK8s UI and any other UI (both control and workload plane) configured to include the Shell UI component(s). Exposed features include:
- user authentication using an OIDC provider
- navigation menu items, displayed according to user groups (retrieved from OIDC)
The default Shell UI configuration values are specified below:
{%- if pillar.addons.dex.enabled %}
{%- set dex_defaults = salt.slsutil.renderer('salt://metalk8s/addons/dex/config/dex.yaml.j2', saltenv=saltenv) %}
{%- set dex = salt.metalk8s_service_configuration.get_service_conf('metalk8s-auth', 'metalk8s-dex-config', dex_defaults) %}
{%- endif %}
{%- set metalk8s_ui_defaults = salt.slsutil.renderer(
    'salt://metalk8s/addons/ui/config/metalk8s-ui-config.yaml.j2', saltenv=saltenv
  )
%}
{%- set metalk8s_ui_config = salt.metalk8s_service_configuration.get_service_conf(
    'metalk8s-ui', 'metalk8s-ui-config', metalk8s_ui_defaults
  )
%}
# Defaults for shell UI configuration
apiVersion: addons.metalk8s.scality.com/v1alpha2
kind: ShellUIConfig
spec:
  {%- if pillar.addons.dex.enabled %}
  oidc:
    providerUrl: "/oidc"
    redirectUrl: "{{ salt.metalk8s_network.get_control_plane_ingress_endpoint() }}/{{ metalk8s_ui_config.spec.basePath.lstrip('/') }}"
    clientId: "metalk8s-ui"
    responseType: "id_token"
    scopes: "openid profile email groups offline_access audience:server:client_id:oidc-auth-client"
  userGroupsMapping:
  {%- for user in dex.spec.config.staticPasswords | map(attribute='email') %}
    "{{ user }}": [metalk8s:admin]
  {%- endfor %}
  {%- endif %}
  discoveryUrl: "/shell/deployed-ui-apps.json"
  logo:
    light: /brand/assets/logo-light.svg
    dark: /brand/assets/logo-dark.svg
    darkRebrand: /brand/assets/logo-darkRebrand.svg
  favicon: /brand/favicon-metalk8s.svg
  canChangeLanguage: false
  canChangeTheme: true
  canChangeInstanceName: false
  themes:
    dark:
      type: "core-ui"
      name: "darkRebrand"
      logoPath: "/brand/assets/logo-dark.svg"
    light:
      type: "core-ui"
      name: "artescaLight"
      logoPath: "/brand/assets/logo-light.svg"
See MetalK8s Shell UI Configuration Customization to override these defaults.
Shell UI Workload Plane Default Configuration¶
Shell UI has a different configuration for the workload plane.
The default Shell UI workload plane configuration values are specified below:
# Defaults for configuration of Shell-UI Workload Plane
apiVersion: addons.metalk8s.scality.com/v1alpha1
kind: WorkloadPlaneShellUIConfig
spec:
  deployedApps: []
  config:
    discoveryUrl: /shell/deployed-ui-apps.json
    logo:
      light: /brand/assets/logo-light.svg
      dark: /brand/assets/logo-dark.svg
      darkRebrand: /brand/assets/logo-darkRebrand.svg
    favicon: /brand/favicon-metalk8s.svg
    canChangeLanguage: false
    canChangeTheme: false
    canChangeInstanceName: false
    productName: MetalK8s
    themes:
      darkRebrand:
        logoPath: /brand/assets/logo-darkRebrand.svg
Service Configurations Customization¶
Workload plane Ingress Controller Configuration Customization¶
Default configuration for the Workload plane Ingress Controller can be overridden by editing its Cluster and Service ConfigMap metalk8s-ingress-controller-config in namespace metalk8s-ingress under the key data.config.yaml:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-ingress \
metalk8s-ingress-controller-config
The following documentation is not exhaustive and is just here to give some hints on basic usage, for more details or advanced configuration, see the official Nginx Ingress Controller documentation.
Disable HTTP2¶
HTTP2 can be disabled by setting use-http2 to false:

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: IngressControllerConfig
    spec:
      config:
        use-http2: "false"
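Other options from the upstream ingress-nginx ConfigMap reference can be set the same way. As an illustrative sketch, raising the maximum allowed request body size with the upstream proxy-body-size option (the value shown is an assumption; adjust it to your workload):

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: IngressControllerConfig
    spec:
      config:
        # "proxy-body-size" is an upstream ingress-nginx ConfigMap option;
        # "0" would disable the body size check entirely.
        proxy-body-size: "8m"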
Applying configuration¶
Any changes made to the metalk8s-ingress-controller-config ConfigMap must then be applied with Salt.

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
-n kube-system -c salt-master salt-master-bootstrap -- \
salt-run state.sls \
metalk8s.addons.nginx-ingress.deployed \
saltenv=metalk8s-127.0.4-dev
Alertmanager Configuration Customization¶
Default configuration for Alertmanager can be overridden by editing its Cluster and Service ConfigMap metalk8s-alertmanager-config in namespace metalk8s-monitoring under the key data.config.yaml:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-monitoring \
metalk8s-alertmanager-config
The following documentation is not exhaustive and is just here to give some hints on basic usage, for more details or advanced configuration, see the official Alertmanager documentation.
Adding inhibition rule for an alert¶
Alert inhibition rules allow one alert to mute notifications for other alerts. For example, alerts with a warning severity can be inhibited when the same alert fires with a critical severity.
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          inhibit_rules:
            - source_match:
                severity: critical
              target_match:
                severity: warning
              equal:
                - alertname
Adding receivers¶
Receivers allow configuring where the alert notifications are sent.
Here is a simple Slack receiver which makes Alertmanager send all notifications to a specific Slack channel.
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          global:
            slack_api_url: https://hooks.slack.com/services/ABCDEFGHIJK
          route:
            receiver: slack-receiver
          receivers:
            - name: slack-receiver
              slack_configs:
                - channel: '#<your-channel>'
                  send_resolved: true
See the Slack documentation on incoming webhooks to activate them for your workspace and retrieve the slack_api_url value.
Another example, with an email receiver:

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          route:
            receiver: email-receiver
          receivers:
            - name: email-receiver
              email_configs:
                - to: <your-address>@<your-domain.tld>
                  from: alertmanager@<your-domain.tld>
                  smarthost: <smtp.your-domain.tld>:587
                  auth_username: alertmanager@<your-domain.tld>
                  auth_identity: alertmanager@<your-domain.tld>
                  auth_password: <password>
                  send_resolved: true
There are more receivers available (PagerDuty, OpsGenie, HipChat, …).
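As a sketch, a minimal PagerDuty receiver could look like the following; the routing key placeholder is yours to fill in, and the exhaustive list of pagerduty_configs fields is in the official Alertmanager documentation:

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          route:
            receiver: pagerduty-receiver
          receivers:
            - name: pagerduty-receiver
              pagerduty_configs:
                # Integration key from a PagerDuty Events API v2 integration
                - routing_key: <your-integration-key>
                  send_resolved: true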
Applying configuration¶
Any changes made to the metalk8s-alertmanager-config ConfigMap must then be applied with Salt.

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
-n kube-system -c salt-master salt-master-bootstrap -- \
salt-run state.sls \
metalk8s.addons.prometheus-operator.deployed \
saltenv=metalk8s-127.0.4-dev
Grafana Configuration Customization¶
Add Extra Dashboard¶
Extra Grafana dashboards can be added by deploying a ConfigMap that embeds the dashboard JSON definition:
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: '1'
  name: <grafana-dashboard-name>
  namespace: metalk8s-monitoring
data:
  <dashboard-filename>.json: |-
    <dashboard-definition>
Note
The ConfigMap must be deployed in the metalk8s-monitoring namespace and the grafana_dashboard: '1' label in the example above is mandatory for the dashboard to be taken into account.
Then this manifest must be applied.
root@bootstrap $ kubectl --kubeconfig=/etc/kubernetes/admin.conf \
apply -f <path-to-the-manifest>
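For illustration, a minimal, hypothetical dashboard ConfigMap could look like the sketch below; the JSON body is a bare dashboard skeleton, and real dashboards exported from Grafana are much larger:

---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    grafana_dashboard: '1'
  name: my-custom-dashboard          # hypothetical name
  namespace: metalk8s-monitoring
data:
  my-custom-dashboard.json: |-
    {
      "title": "My Custom Dashboard",
      "uid": "my-custom-dashboard",
      "panels": [],
      "schemaVersion": 16
    }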
Prometheus Configuration Customization¶
Default configuration for Prometheus can be overridden by editing its Cluster and Service ConfigMap metalk8s-prometheus-config in namespace metalk8s-monitoring under the key data.config.yaml:
root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-monitoring \
metalk8s-prometheus-config
Change Retention Time¶
Prometheus is deployed with a time-based retention of 10 days. This value can be overridden:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      config:
        retention_time: 30d
Note
Supported time units are y, w, d, h, m, s and ms (years, weeks, days, hours, minutes, seconds and milliseconds).
Then apply the configuration.
Set Retention Size¶
Prometheus is deployed with size-based retention disabled. This functionality can be activated:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      config:
        retention_size: 10GB
Note
Supported size units are B, KB, MB, GB, TB and PB.
Warning
Prometheus does not take the write-ahead log (WAL) size into account to calculate the retention, so the actual disk consumption can be greater than retention_size. You should add at least a 10% margin to be safe (e.g. set retention_size to 9GB for a 10GB volume).
Both size- and time-based retentions can be activated at the same time.
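For example, a sketch keeping at most 30 days and 9GB of data at the same time:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      config:
        retention_time: 30d
        retention_size: 9GB  # leaves a ~10% margin on a 10GB volume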
Then apply the configuration.
Set Kubelet metrics scrape timeout¶
In some cases (e.g. when using a lot of sparse loop devices), the kubelet metrics endpoint can be very slow to answer and the Prometheus’ default 10s scrape timeout may not be sufficient. To avoid timeouts and thus losing metrics, you can customize the scrape timeout as follows:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      config:
        serviceMonitor:
          kubelet:
            scrapeTimeout: 30s
Then apply the configuration.
Predefined Alert Rules Customization¶
A subset of the predefined alert rules can be customized; the exhaustive list is given under spec.rules in the Prometheus Default Configuration above.
For example, to change the threshold for the disk space alert (% of free space left) from 5% to 10%, simply do:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      rules:
        node_exporter:
          node_filesystem_almost_out_of_space:
            warning:
              available: 10
Then apply the configuration.
Enable Prometheus Admin API¶
For security reasons, Prometheus Admin API is disabled by default. It can be enabled with the following:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      config:
        enable_admin_api: true
Then apply the configuration.
Adding New Rules¶
Alerting rules allow defining alert conditions based on PromQL expressions and sending notifications about these alerts to Alertmanager. In order to add alert rules, a new PrometheusRule manifest must be created.
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    metalk8s.scality.com/monitor: ''
  name: <prometheus-rule-name>
  namespace: <namespace-name>
spec:
  groups:
    - name: <rules-group-name>
      rules:
        - alert: <alert-rule-name>
          annotations:
            description: "some description"
            summary: "alert summary"
          expr: <PromQL-expression>
          for: 1h
          labels:
            severity: warning
Note
The metalk8s.scality.com/monitor: '' label in the example above is mandatory for Prometheus to take the new rules into account.
Then this manifest must be applied.
root@bootstrap $ kubectl --kubeconfig=/etc/kubernetes/admin.conf \
apply -f <path-to-the-manifest>
For more details on alert rules, see the official Prometheus alerting rules documentation.
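As a concrete, hypothetical illustration, the following rule would fire when a node's root filesystem has had less than 10% free space for one hour (the rule and group names are placeholders; the expression relies on standard node_exporter metrics):

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    metalk8s.scality.com/monitor: ''
  name: example-extra-rules          # hypothetical name
  namespace: metalk8s-monitoring
spec:
  groups:
    - name: example.rules
      rules:
        - alert: RootFilesystemLowSpace
          annotations:
            description: "Root filesystem on {{ $labels.instance }} has less than 10% free space."
            summary: "Root filesystem almost full"
          expr: |
            node_filesystem_avail_bytes{mountpoint="/"}
              / node_filesystem_size_bytes{mountpoint="/"} < 0.10
          for: 1h
          labels:
            severity: warning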
Adding New Service to Monitor¶
To tell Prometheus to scrape metrics for a Pod, a new ServiceMonitor manifest must be created.
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    metalk8s.scality.com/monitor: ''
  name: <service-monitor-name>
  namespace: <namespace-name>
spec:
  endpoints:
    - port: <port-name>
  namespaceSelector:
    matchNames:
      - <namespace-name>
  selector:
    matchLabels:
      app.kubernetes.io/name: <app-name>
Note
The metalk8s.scality.com/monitor: '' label in the example above is mandatory for Prometheus to take the new service to monitor into account.
Then this manifest must be applied.
root@bootstrap $ kubectl --kubeconfig=/etc/kubernetes/admin.conf \
apply -f <path-to-the-manifest>
For details and an example, see the Prometheus Operator documentation.
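For instance, a sketch for a hypothetical application my-app exposing its metrics on a Service port named metrics in namespace my-namespace (all names here are assumptions to be replaced with your own):

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    metalk8s.scality.com/monitor: ''
  name: my-app                       # hypothetical name
  namespace: my-namespace
spec:
  endpoints:
    - port: metrics                  # name of the Service port exposing metrics
  namespaceSelector:
    matchNames:
      - my-namespace
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app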
Applying configuration¶
Any changes made to the metalk8s-prometheus-config ConfigMap must then be applied with Salt.

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
-n kube-system -c salt-master salt-master-bootstrap -- \
salt-run state.sls \
metalk8s.addons.prometheus-operator.deployed \
saltenv=metalk8s-127.0.4-dev
Dex Configuration Customization¶
Enable or Disable the Static User Store¶
Dex includes a local store of users and their passwords, which is enabled by default.
Important
To continue using MetalK8s OIDC (especially for MetalK8s UI and Grafana) in case of the loss of external identity providers, it is advised to keep the static user store enabled.
To disable (resp. enable) it, perform the following steps:
1. Set the enablePasswordDB configuration flag to false (resp. true):

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap metalk8s-dex-config -n metalk8s-auth
# [...]
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: DexConfig
    spec:
      # [...]
      config:
        # [...]
        enablePasswordDB: false  # or true
2. Apply your changes:

root@bootstrap $ kubectl exec -n kube-system -c salt-master \
--kubeconfig /etc/kubernetes/admin.conf \
salt-master-bootstrap -- salt-run state.sls \
metalk8s.addons.dex.deployed saltenv=metalk8s-127.0.4-dev
Note
Dex enables other operations on static users, such as Adding a Static User and Changing a Static User Password.
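As a sketch, adding a static user amounts to appending an entry to staticPasswords (the field names follow the upstream Dex format; the email, username, hash and user ID below are placeholders you must generate yourself):

# [...]
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: DexConfig
    spec:
      # [...]
      config:
        # [...]
        enablePasswordDB: true
        staticPasswords:
          - email: "new-user@example.com"
            # bcrypt hash of the password, e.g. generated with:
            #   htpasswd -bnBC 10 "" <password> | tr -d ':\n'
            hash: "<bcrypt-hash>"
            username: "new-user"
            userID: "<unique-user-id>"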
Additional Configurations¶
All configuration options exposed by Dex can be changed by following a similar procedure to the ones documented above. Refer to Dex documentation for an exhaustive explanation of what is supported.
To define (or override) any configuration option, follow these steps:
1. Add (or change) the corresponding field under the spec.config key of the metalk8s-auth/metalk8s-dex-config ConfigMap:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap metalk8s-dex-config -n metalk8s-auth
For example, registering a client application with Dex can be done by adding a new entry under staticClients:

# [...]
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: DexConfig
    spec:
      # [...]
      config:
        # [...]
        staticClients:
          - id: example-app
            secret: example-app-secret
            name: 'Example App'
            # Where the app will be running.
            redirectURIs:
              - 'http://127.0.0.1:5555/callback'
2. Apply your changes by running:

root@bootstrap $ kubectl exec -n kube-system -c salt-master \
--kubeconfig /etc/kubernetes/admin.conf \
salt-master-bootstrap -- salt-run state.sls \
metalk8s.addons.dex.deployed saltenv=metalk8s-127.0.4-dev
Todo
Add documentation for the following:
External authentication (#2013)
Configuring LDAP
Configuring Active Directory (AD)
Loki Configuration Customization¶
Default configuration for Loki can be overridden by editing its Cluster and Service ConfigMap metalk8s-loki-config in namespace metalk8s-logging under the key data.config.yaml:
root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-logging \
metalk8s-loki-config
The following documentation is not exhaustive and is just here to give some hints on basic usage, for more details or advanced configuration, see the official Loki documentation.
Add Loki memory limit¶
Loki consumes some memory to store chunks before they get written to disk. Its memory consumption depends heavily on usage, which is why we do not set any limit by default.
However, if Loki is unable to write to the disk for any reason, it keeps logs in memory, leading to large memory consumption until the issue is resolved. To prevent Loki from taking too much memory from the host, potentially leading to resource starvation, you can define a resource limit on the Pod.
For example, to set the limit to 4 GiB, the ConfigMap must be edited as follows:
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: LokiConfig
    spec:
      deployment:
        resources:
          limits:
            memory: "4Gi"
Changing the logs retention period¶
Retention period is the time the logs will be stored and available before getting purged.
For example, to set the retention period to 1 week, the ConfigMap must be edited as follows:
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: LokiConfig
    spec:
      config:
        table_manager:
          retention_period: 168h
Note
Due to its internal implementation, retention_period must be a multiple of 24h in order to get the expected behavior.
Fluent-bit Configuration Customization¶
Add Fluent-bit memory limit¶
Fluent-bit consumes some memory to store log input before processing it, and log chunks before sending them to Loki. Its memory consumption depends heavily on usage, which is why you may want to change it.
For example, to set the limit to 4 GiB, the ConfigMap must be edited as follows:
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: FluentBitConfig
    spec:
      deployment:
        resources:
          limits:
            memory: "4Gi"
Metalk8s UI Configuration Customization¶
Default configuration for MetalK8s UI can be overridden by editing its Cluster and Service ConfigMap metalk8s-ui-config in namespace metalk8s-ui under the key data.config.yaml:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-ui \
metalk8s-ui-config
Changing the MetalK8s UI Ingress Path¶
In order to expose another UI at the root path of the control plane, in place of MetalK8s UI, you need to change the Ingress path from which MetalK8s UI is served.
For example, to serve MetalK8s UI at /platform instead of /, follow these steps:
1. Change the value of spec.basePath in the ConfigMap:
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: UIConfig
    spec:
      basePath: /platform
2. Apply your changes by running:

root@bootstrap $ kubectl exec -n kube-system -c salt-master \
--kubeconfig /etc/kubernetes/admin.conf \
salt-master-bootstrap -- salt-run state.sls \
metalk8s.addons.ui.deployed saltenv=metalk8s-127.0.4-dev
MetalK8s Shell UI Configuration Customization¶
Default configuration for MetalK8s Shell UI can be overridden by editing its Cluster and Service ConfigMap metalk8s-shell-ui-config in namespace metalk8s-ui under the key data.config.yaml.
Changing UI OIDC Configuration¶
In order to adapt the OIDC configuration (e.g. the provider URL or the client ID) used by the UI shareable navigation bar (called Shell UI), you need to modify its ConfigMap.
For example, in order to replace the default client ID with "ui", follow these steps:
1. Edit the ConfigMap:
root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-ui \
metalk8s-shell-ui-config
2. Add the following entry:
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: ShellUIConfig
    spec:
      # [...]
      oidc:
        # [...]
        clientId: "ui"
3. Apply your changes by running:

root@bootstrap $ kubectl exec -n kube-system -c salt-master \
--kubeconfig /etc/kubernetes/admin.conf \
salt-master-bootstrap -- salt-run state.sls \
metalk8s.addons.ui.deployed saltenv=metalk8s-127.0.4-dev
You can similarly edit the requested scopes through the "scopes" attribute or the OIDC provider URL through the "providerUrl" attribute.
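For example, a sketch overriding both at once (the values shown are purely illustrative):

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha2
    kind: ShellUIConfig
    spec:
      # [...]
      oidc:
        # [...]
        providerUrl: "/my-oidc"         # illustrative value
        scopes: "openid profile email"  # illustrative value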
MetalK8s Shell UI Workloadplane Configuration Customization¶
Default configuration for MetalK8s Shell UI workloadplane can be overridden by editing its Cluster and Service ConfigMap workloadplane-shell-ui-config in namespace metalk8s-ui under the key data.config.yaml.
Changing UI Menu Entries¶
To change the UI navigation menu entries on the workloadplane, follow these steps:
1. Edit the ConfigMap:
root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap -n metalk8s-ui \
workloadplane-shell-ui-config
2. Edit the navbar field. As an example, we add an entry to the main section (there is also a subLogin section):
apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com/v1alpha1
    kind: WorkloadPlaneShellUIConfig
    spec:
      deployedApps:
        - kind: ModuleFederatedAppKind
          name: appname
          version: x.y.z
          url: https://app.url
          appHistoryBasePath: ""
      config:
        # [...]
        navbar:
          # [...]
          main:
            # [...]
            - kind: ModuleFederatedAppKind
              view: ViewToFederate
            # Alternatively, for a non-federated app:
            - isExternal: true
              label:
                en: Documentation
                fr: Documentation
              url: https://13.48.197.10:8443/docs/
3. Apply your changes by running:

root@bootstrap $ kubectl exec -n kube-system -c salt-master \
--kubeconfig /etc/kubernetes/admin.conf \
salt-master-bootstrap -- salt-run state.sls \
metalk8s.addons.ui.deployed saltenv=metalk8s-127.0.4-dev
Replicas Count Customization¶
MetalK8s administrators can scale the number of pods for any service mentioned below by changing the number of replicas, which defaults to a single pod for most services.
Service        Namespace            ConfigMap
Alertmanager   metalk8s-monitoring  metalk8s-alertmanager-config
Grafana        metalk8s-monitoring  metalk8s-grafana-config
Prometheus     metalk8s-monitoring  metalk8s-prometheus-config
Dex            metalk8s-auth        metalk8s-dex-config
Loki           metalk8s-logging     metalk8s-loki-config
To change the number of replicas, perform the following operations:
1. From the Bootstrap node, edit the ConfigMap attributed to the service and modify the replicas entry:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
edit configmap <ConfigMap> -n <Namespace>

Note
For each service, consult the table above to obtain the ConfigMap and the Namespace to be used for the above command.

Make sure to replace the <number-of-replicas> field with an integer value (for example 2).

[...]
data:
  config.yaml: |-
    spec:
      deployment:
        replicas: <number-of-replicas>
[...]

2. Save the ConfigMap changes.

3. From the Bootstrap node, execute the following command, which connects to the Salt master container and applies the Salt states to propagate the new changes down to the underlying services:

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
-n kube-system -c salt-master salt-master-bootstrap \
-- salt-run state.sls metalk8s.deployed \
saltenv=metalk8s-127.0.4-dev
Note
Scaling the number of pods for services like Prometheus, Alertmanager and Loki requires provisioning extra persistent volumes for these pods to start up normally. Refer to this procedure for more information.
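For instance, a minimal sketch scaling Prometheus to two replicas (assuming the extra persistent volume mentioned in the note above has been provisioned):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      deployment:
        replicas: 2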