Cluster and Services Configurations

This section lists the available Cluster and Services Configurations, and describes the procedures for customizing and applying any of them.

Default Service Configurations

MetalK8s addons (Alertmanager, Dex, Grafana, Loki and Prometheus) ship with default runtime service configurations required for basic service deployment. The exhaustive list of default Service Configurations deployed in a MetalK8s cluster follows.

Alertmanager Default Configuration

Alertmanager handles alerts sent by Prometheus. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.

The default configuration values for Alertmanager are specified below:

# Configuration of the Alertmanager service
apiVersion: addons.metalk8s.scality.com
kind: AlertmanagerConfig
spec:
  # Configure the Alertmanager Deployment
  deployment:
    replicas: 1
  notification:
    config:
      global:
        resolve_timeout: 5m
      templates: []
      route:
        group_by: ['job']
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: 'null'
        routes:
        - match:
            alertname: Watchdog
          receiver: 'null'
      receivers:
        - name: 'null'
      inhibit_rules: []

See Alertmanager Configuration Customization to override these defaults.

Dex Default Configuration

Dex is an Identity Provider that drives user authentication and identity management in a MetalK8s cluster.

The default configuration values for Dex are specified below:

# Defaults for configuration of Dex (OIDC)
apiVersion: addons.metalk8s.scality.com/v1alpha2
kind: DexConfig
spec:
  # Deployment configuration
  deployment:
    replicas: 2

  # Dex server configuration
  config:
    issuer: https://{{ grains.metalk8s.control_plane_ip }}:8443/oidc

    storage:
      config:
        inCluster: true
      type: kubernetes

    logger:
      level: debug

    web:
      https: 0.0.0.0:5556
      tlsCert: /etc/dex/tls/https/server/tls.crt
      tlsKey: /etc/dex/tls/https/server/tls.key

    frontend:
      theme: scality
      issuer: MetalK8s

    connectors: []

    oauth2:
      alwaysShowLoginScreen: true
      skipApprovalScreen: true
      responseTypes: ["code", "token", "id_token"]

    expiry:
      signingKeys: "6h"
      idTokens: "24h"

    staticClients:
    - id: oidc-auth-client
      name: oidc-auth-client
      redirectURIs:
      - urn:ietf:wg:oauth:2.0:oob
      secret: lkfa9jaf3kfakqyeoikfjakf93k2l
      trustedPeers:
      - metalk8s-ui
      - grafana-ui
    - id: metalk8s-ui
      name: MetalK8s UI
      redirectURIs:
      - https://{{ grains.metalk8s.control_plane_ip }}:8443/oauth2/callback
      secret: ybrMJpVMQxsiZw26MhJzCjA2ut
    - id: grafana-ui
      name: Grafana UI
      redirectURIs:
      - https://{{ grains.metalk8s.control_plane_ip }}:8443/grafana/login/generic_oauth
      secret: 4lqK98NcsWG5qBRHJUqYM1

    enablePasswordDB: true
    staticPasswords: []

See Dex Configuration Customization to override these defaults.

Grafana Default Configuration

Grafana is a web interface used to visualize and analyze, through graphs and dashboards, the metrics scraped by Prometheus.

The default configuration values for Grafana are specified below:

# Configuration of the Grafana service
apiVersion: addons.metalk8s.scality.com
kind: GrafanaConfig
spec:
  # Configure the Grafana Deployment
  deployment:
    replicas: 1

Prometheus Default Configuration

Prometheus is responsible for monitoring all the applications and systems in the MetalK8s cluster. It scrapes and stores various metrics from these systems, then analyzes them against a set of alerting rules. If a rule matches, Prometheus sends an alert to Alertmanager.

The default configuration values for Prometheus are specified below:

# Configuration of the Prometheus service
apiVersion: addons.metalk8s.scality.com
kind: PrometheusConfig
spec:
  # Configure the Prometheus Deployment
  deployment:
    replicas: 1
  rules:
    node_exporter:
      node_filesystem_space_filling_up:
        warning:
          hours: 24  # Hours before there is no space left
          threshold: 40  # Min space left to trigger prediction
        critical:
          hours: 4
          threshold: 20
      node_filesystem_almost_out_of_space:
        warning:
          available: 5  # Percentage of free space left
        critical:
          available: 3
      node_filesystem_files_filling_up:
        warning:
          hours: 24  # Hours before there is no inode left
          threshold: 40  # Min inodes left to trigger prediction
        critical:
          hours: 4
          threshold: 20
      node_filesystem_almost_out_of_files:
        warning:
          available: 5  # Percentage of free inodes left
        critical:
          available: 3
      node_network_receive_errors:
        warning:
          errors: 10  # Number of receive errors for the last 2m
      node_network_transmit_errors:
        warning:
          errors: 10  # Number of transmit errors for the last 2m
      node_high_number_conntrack_entries_used:
        warning:
          threshold: 0.75
      node_clock_skew_detected:
        warning:
          threshold:
            high: 0.05
            low: -0.05
      node_clock_not_synchronising:
        warning:
          threshold: 0

Loki Default Configuration

Loki is a log aggregation system. Its job is to receive logs from collectors (fluent-bit), store them on persistent storage, and make them queryable through its API.

The default configuration values for Loki are specified below:

# Configuration of the Loki service
apiVersion: addons.metalk8s.scality.com
kind: LokiConfig
spec:
  deployment:
    replicas: 1
  config:
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    ingester:
      chunk_block_size: 262144
      chunk_idle_period: 3m
      chunk_retain_period: 1m
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
      max_transfer_retries: 0
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
    schema_config:
      configs:
      - from: 2018-04-15
        index:
          period: 168h
          prefix: index_
        object_store: filesystem
        schema: v9
        store: boltdb
    server:
      http_listen_port: 3100
    storage_config:
      boltdb:
        directory: /data/loki/index
      filesystem:
        directory: /data/loki/chunks
    table_manager:
      retention_deletes_enabled: true
      retention_period: 336h
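
As an illustration of the query API mentioned above, logs can be fetched over HTTP. A minimal sketch, assuming a Loki version exposing the v1 HTTP API and a Service named loki in the metalk8s-logging namespace (both are assumptions):

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   -n metalk8s-logging port-forward svc/loki 3100:3100 &
root@bootstrap $ curl -G 'http://localhost:3100/loki/api/v1/query_range' \
                   --data-urlencode 'query={namespace="kube-system"}' \
                   --data-urlencode 'limit=10'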

Service Configurations Customization

Alertmanager Configuration Customization

Default configuration for Alertmanager can be overridden by editing its Cluster and Service ConfigMap metalk8s-alertmanager-config in namespace metalk8s-monitoring under the key data.config.yaml:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   edit configmap -n metalk8s-monitoring \
                   metalk8s-alertmanager-config

The following documentation is not exhaustive and only gives hints on basic usage. For more details or advanced configuration, see the official Alertmanager documentation.

Adding inhibition rule for an alert

Inhibition rules allow one alert to mute notifications for other alerts.

For example, you can inhibit alerts with a warning severity when the same alert is already firing with a critical severity.

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          inhibit_rules:
            - source_match:
                severity: critical
              target_match:
                severity: warning
              equal:
                - alertname

Adding receivers

Receivers allow configuring where the alert notifications are sent.

Here is a simple Slack receiver which makes Alertmanager send all notifications to a specific Slack channel.

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          global:
            slack_api_url: https://hooks.slack.com/services/ABCDEFGHIJK
          route:
            receiver: slack-receiver
          receivers:
            - name: slack-receiver
              slack_configs:
                - channel: '#<your-channel>'
                  send_resolved: true

See the Slack documentation on incoming webhooks to activate them for your workspace and retrieve the slack_api_url value.
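
To check that the webhook URL is valid before wiring it into Alertmanager, you can POST a test message to it manually (the URL below is the placeholder from the example above):

root@bootstrap $ curl -X POST -H 'Content-type: application/json' \
                   --data '{"text": "Test notification"}' \
                   https://hooks.slack.com/services/ABCDEFGHIJK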

Another example, with an email receiver.

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          route:
            receiver: email-receiver
          receivers:
            - name: email-receiver
              email_configs:
                - to: <your-address>@<your-domain.tld>
                  from: alertmanager@<your-domain.tld>
                  smarthost: <smtp.your-domain.tld>:587
                  auth_username: alertmanager@<your-domain.tld>
                  auth_identity: alertmanager@<your-domain.tld>
                  auth_password: <password>
                  send_resolved: true

There are more receivers available (PagerDuty, OpsGenie, HipChat, …).
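
For instance, a PagerDuty receiver follows the same pattern as the receivers above; a minimal sketch, assuming an Events API (v1) integration key, shown as a placeholder:

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: AlertmanagerConfig
    spec:
      notification:
        config:
          route:
            receiver: pagerduty-receiver
          receivers:
            - name: pagerduty-receiver
              pagerduty_configs:
                - service_key: <your-integration-key>
                  send_resolved: true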

Applying configuration

Any changes made to the metalk8s-alertmanager-config ConfigMap must then be applied with Salt.

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
                   -n kube-system -c salt-master salt-master-bootstrap -- \
                   salt-run state.sls \
                   metalk8s.addons.prometheus-operator.deployed \
                   saltenv=metalk8s-2.6.1
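
Once the state has run, you can verify that the Alertmanager pods are healthy (the label selector is an assumption based on the usual prometheus-operator labels):

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   get pods -n metalk8s-monitoring -l app=alertmanager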

Prometheus Configuration Customization

Predefined Alert Rules Customization

A subset of the predefined alert rules can be customized; the exhaustive list corresponds to the rules shown in the Prometheus Default Configuration above.

To change these alert rule thresholds, edit the metalk8s-prometheus-config ConfigMap in namespace metalk8s-monitoring as follows.

root@bootstrap $ kubectl edit --kubeconfig=/etc/kubernetes/admin.conf \
                   configmap -n metalk8s-monitoring \
                   metalk8s-prometheus-config

Then, add the rules you want to override under the data.config.yaml key. For example, to change the threshold for the disk space alert (% of free space left) from 5% to 10%, simply do:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metalk8s-prometheus-config
  namespace: metalk8s-monitoring
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: PrometheusConfig
    spec:
      rules:
        node_exporter:
          node_filesystem_almost_out_of_space:
            warning:
              available: 10

The new configuration must then be applied with Salt.

root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
                   -n kube-system -c salt-master salt-master-bootstrap -- \
                   salt-run state.sls \
                   metalk8s.addons.prometheus-operator.deployed \
                   saltenv=metalk8s-2.6.1
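
You can then confirm that the rule objects managed by the operator were regenerated by listing them:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   get prometheusrules -n metalk8s-monitoring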

Adding New Rules

Alerting rules allow defining alert conditions based on PromQL expressions, and sending notifications about firing alerts to Alertmanager.

In order to add Alert rules, a new PrometheusRule manifest must be created.

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: prometheus-operator
    app.kubernetes.io/name: prometheus-operator
  name: <prometheus-rule-name>
  namespace: <namespace-name>
spec:
  groups:
  - name: <rules-group-name>
    rules:
    - alert: <alert-rule-name>
      annotations:
        description: "some description"
        summary: "alert summary"
      expr: <PromQL-expression>
      for: 1h
      labels:
        severity: warning
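
For illustration, here is the same manifest with the placeholders filled in for a hypothetical alert that fires when a node has less than 10% of its memory available for one hour (the metric names come from node_exporter; the rule itself is only an example, not a MetalK8s default):

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: prometheus-operator
    app.kubernetes.io/name: prometheus-operator
  name: example-memory-rule
  namespace: metalk8s-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: NodeMemoryAlmostExhausted
      annotations:
        description: "Node {{ $labels.instance }} has less than 10% of memory available."
        summary: "Node memory almost exhausted"
      expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
      for: 1h
      labels:
        severity: warning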

Then this manifest must be applied.

root@bootstrap $ kubectl --kubeconfig=/etc/kubernetes/admin.conf \
                   apply -f <path-to-the-manifest>

For more details on alert rules, see the official Prometheus alerting rules documentation.

Dex Configuration Customization

Enable or Disable the Static User Store

Dex includes a local store of users and their passwords, which is enabled by default.

Important

To keep MetalK8s OIDC working (especially for the MetalK8s UI and Grafana) if external identity providers become unavailable, it is advised to keep the static user store enabled.

To disable (resp. enable) it, perform the following steps:

  1. Set the enablePasswordDB configuration flag to false (resp. true):

    root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                       edit configmap metalk8s-dex-config -n metalk8s-auth
    
    # [...]
    data:
      config.yaml: |-
        apiVersion: addons.metalk8s.scality.com/v1alpha2
        kind: DexConfig
        spec:
          # [...]
          config:
            # [...]
            enablePasswordDB: false  # or true
    
  2. Apply your changes:

    root@bootstrap $ kubectl exec -n kube-system -c salt-master \
                       --kubeconfig /etc/kubernetes/admin.conf \
                       salt-master-bootstrap -- salt-run state.sls \
                       metalk8s.addons.dex.deployed saltenv=metalk8s-2.6.1

Add a Static User

To add a static user, perform the following operations from the Bootstrap node:

  1. Generate a bcrypt hash of your password:

    root@bootstrap $ htpasswd -nBC 14 "" | tr -d ':'
    New password:
    Re-type new password:
    <your hash here, starting with "$2y$14$">
    
  2. Generate a unique identifier:

    root@bootstrap $ python -c 'import uuid; print(uuid.uuid4())'
    
  3. Add a new entry in the staticPasswords list, using the password hash and user ID generated before, and choosing a new email and username:

    root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                       edit configmap metalk8s-dex-config -n metalk8s-auth
    
    # [...]
    data:
      config.yaml: |-
        apiVersion: addons.metalk8s.scality.com/v1alpha2
        kind: DexConfig
        spec:
          # [...]
          config:
            # [...]
            staticPasswords:
              # [...]
              - email: "<email>"
                hash: "<generated-password-hash>"
                username: "<username>"
                userID: "<generated-identifier>"
    
  4. Apply your changes:

    root@bootstrap $ kubectl exec -n kube-system -c salt-master \
                       --kubeconfig /etc/kubernetes/admin.conf \
                       salt-master-bootstrap -- salt-run state.sls \
                       metalk8s.addons.dex.deployed saltenv=metalk8s-2.6.1
  5. Bind the user to an existing (Cluster)Role using this procedure.

  6. Verify that the user has been successfully added and that you can log in to the MetalK8s UI using the new email and password.
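
If the login does not work as expected, inspecting the Dex pod logs can help. A possible command, assuming the pods carry the usual app.kubernetes.io/name=dex label:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   logs -n metalk8s-auth -l app.kubernetes.io/name=dex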

Change Password for an Existing Static User

Important

Default admin user

A new MetalK8s install comes provisioned with a default admin account, with a predefined password (see this section). It is strongly recommended that you change this password as soon as possible, especially if your control plane network is accessible to untrusted clients.

To change the password of an existing user, perform the following operations from the Bootstrap node:

  1. Generate a bcrypt hash of the new password using this procedure.

  2. Find the entry for the selected user in the staticPasswords list, and update its hash:

    root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                       edit configmap metalk8s-dex-config -n metalk8s-auth
    
    # [...]
    data:
      config.yaml: |-
        apiVersion: addons.metalk8s.scality.com/v1alpha2
        kind: DexConfig
        spec:
          # [...]
          config:
            # [...]
            staticPasswords:
              # [...]
              - email: "<previous-email>"
                hash: "<new-password-hash>"
                username: "<previous-username>"
                userID: "<previous-identifier>"
              # [...]
    
  3. Apply your changes:

    root@bootstrap $ kubectl exec -n kube-system -c salt-master \
                       --kubeconfig /etc/kubernetes/admin.conf \
                       salt-master-bootstrap -- salt-run state.sls \
                       metalk8s.addons.dex.deployed saltenv=metalk8s-2.6.1
  4. Verify that the password has been changed and that you can log in to the MetalK8s UI using the new password.

Additional Configurations

All configuration options exposed by Dex can be changed by following a procedure similar to the ones documented above. Refer to the Dex documentation for an exhaustive explanation of what is supported.

To define (or override) any configuration option, follow these steps:

  1. Add (or change) the corresponding field under the spec.config key of the metalk8s-auth/metalk8s-dex-config ConfigMap:

    root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                       edit configmap metalk8s-dex-config -n metalk8s-auth
    

    For example, registering a client application with Dex can be done by adding a new entry under staticClients:

    # [...]
    data:
      config.yaml: |-
        apiVersion: addons.metalk8s.scality.com/v1alpha2
        kind: DexConfig
        spec:
          # [...]
          config:
            # [...]
            staticClients:
            - id: example-app
              secret: example-app-secret
              name: 'Example App'
              # Where the app will be running.
              redirectURIs:
              - 'http://127.0.0.1:5555/callback'
    
  2. Apply your changes by running:

    root@bootstrap $ kubectl exec -n kube-system -c salt-master \
                       --kubeconfig /etc/kubernetes/admin.conf \
                       salt-master-bootstrap -- salt-run state.sls \
                       metalk8s.addons.dex.deployed saltenv=metalk8s-2.6.1
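
After the Salt state completes, a quick way to check that Dex serves the updated configuration is to query the standard OIDC discovery endpoint under the issuer URL (-k skips certificate verification; replace <control-plane-ip> accordingly):

root@bootstrap $ curl -k \
                   https://<control-plane-ip>:8443/oidc/.well-known/openid-configuration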

Todo

Add documentation for the following:

  • External authentication (#2013)

    • Configuring LDAP

    • Configuring Active Directory (AD)

Loki Configuration Customization

Default configuration for Loki can be overridden by editing its Cluster and Service ConfigMap metalk8s-loki-config in namespace metalk8s-logging under the key data.config.yaml:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   edit configmap -n metalk8s-logging \
                   metalk8s-loki-config

The following documentation is not exhaustive and only gives hints on basic usage. For more details or advanced configuration, see the official Loki documentation.

Changing the log retention period

The retention period is the time during which logs are stored and available before being purged.

For example, to set the retention period to 1 week, the ConfigMap must be edited as follows:

apiVersion: v1
kind: ConfigMap
data:
  config.yaml: |-
    apiVersion: addons.metalk8s.scality.com
    kind: LokiConfig
    spec:
      config:
        table_manager:
          retention_period: 168h

Note

Due to the internal implementation, retention_period must be a multiple of 24h in order to get the expected behavior.

Replicas Count Customization

MetalK8s administrators can scale any of the services listed in the table below by changing the number of replicas, which defaults to a single pod per service (except Dex, which defaults to two).

Service        Namespace             ConfigMap
Alertmanager   metalk8s-monitoring   metalk8s-alertmanager-config
Grafana        metalk8s-monitoring   metalk8s-grafana-config
Prometheus     metalk8s-monitoring   metalk8s-prometheus-config
Dex            metalk8s-auth         metalk8s-dex-config
Loki           metalk8s-logging      metalk8s-loki-config

To change the number of replicas, perform the following operations:

  1. From the Bootstrap node, edit the ConfigMap associated with the service, then modify the replicas entry.

    root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                       edit configmap <ConfigMap> -n <Namespace>
    

    Note

    For each service, consult the Cluster Services table to obtain the ConfigMap and the Namespace to be used for the above command.

    Make sure to replace the <number-of-replicas> field with an integer value (for example, 2).

    [...]
    data:
       config.yaml: |-
          spec:
             deployment:
                replicas: <number-of-replicas>
    [...]
    
  2. Save the ConfigMap changes.

  3. From the Bootstrap node, execute the following command, which connects to the Salt master container and applies the Salt states that propagate the new changes to the underlying services.

    root@bootstrap $ kubectl exec --kubeconfig /etc/kubernetes/admin.conf \
                       -n kube-system -c salt-master salt-master-bootstrap \
                       -- salt-run state.sls metalk8s.deployed \
                       saltenv=metalk8s-2.6.1

    Note

    Scaling the number of pods for services like Prometheus, Alertmanager and Loki requires provisioning extra persistent volumes for these pods to start up normally. Refer to this procedure for more information.
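
Once the Salt states have been applied, the change can be verified by counting the running pods for the service in its namespace (see the table above), for example:

root@bootstrap $ kubectl --kubeconfig /etc/kubernetes/admin.conf \
                   get pods -n <Namespace>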