Prometheus
In a MetalK8s cluster, the Prometheus service records real-time metrics in a time series database. Prometheus queries a list of data sources called “exporters” at a configurable polling interval, and aggregates this data across the various sources.
Prometheus uses a special language, Prometheus Query Language (PromQL), to write alerting and recording rules.
Default Alert Rules
Alert rules enable a user to specify a condition that must occur before an external system like Slack is notified. For example, a MetalK8s administrator might want to raise an alert for any node that is unreachable for more than one minute.
Out of the box, MetalK8s ships with preconfigured alert rules, which are written as PromQL queries. The table below lists all the preconfigured alert rules exposed by a newly deployed MetalK8s cluster.
To customize predefined alert rules, refer to Prometheus Configuration Customization.
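For illustration only, the node-unreachable example above could be expressed as a Prometheus rule group similar to the following sketch (this is not one of the shipped rules; the metric, labels, and threshold shown are assumptions):

  groups:
    - name: example.rules
      rules:
        - alert: NodeUnreachable
          # PromQL expression: the node-exporter target could not be scraped.
          expr: up{job="node-exporter"} == 0
          # The condition must hold for one minute before the alert fires.
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} has been unreachable for more than one minute."

The expr field holds the PromQL query, for defines how long the condition must remain true before the alert fires, and labels and annotations drive routing and notification text in Alertmanager.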
Name | Severity | Description
---|---|---
AlertmanagerFailedReload | critical | Reloading an Alertmanager configuration has failed.
AlertmanagerMembersInconsistent | critical | A member of an Alertmanager cluster has not found all other cluster members.
AlertmanagerFailedToSendAlerts | warning | An Alertmanager instance failed to send notifications.
AlertmanagerClusterFailedToSendAlerts | critical | All Alertmanager instances in a cluster failed to send notifications to a critical integration.
AlertmanagerClusterFailedToSendAlerts | warning | All Alertmanager instances in a cluster failed to send notifications to a non-critical integration.
AlertmanagerConfigInconsistent | critical | Alertmanager instances within the same cluster have different configurations.
AlertmanagerClusterDown | critical | Half or more of the Alertmanager instances within the same cluster are down.
AlertmanagerClusterCrashlooping | critical | Half or more of the Alertmanager instances within the same cluster are crashlooping.
etcdInsufficientMembers | critical | etcd cluster “{{ $labels.job }}”: insufficient members ({{ $value }}).
etcdNoLeader | critical | etcd cluster “{{ $labels.job }}”: member {{ $labels.instance }} has no leader.
etcdHighNumberOfLeaderChanges | warning | etcd cluster “{{ $labels.job }}”: instance {{ $labels.instance }} has seen {{ $value }} leader changes within the last hour.
etcdHighNumberOfFailedGRPCRequests | warning | etcd cluster “{{ $labels.job }}”: {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.
etcdHighNumberOfFailedGRPCRequests | critical | etcd cluster “{{ $labels.job }}”: {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.
etcdGRPCRequestsSlow | critical | etcd cluster “{{ $labels.job }}”: gRPC requests to {{ $labels.grpc_method }} are taking {{ $value }}s on etcd instance {{ $labels.instance }}.
etcdMemberCommunicationSlow | warning | etcd cluster “{{ $labels.job }}”: member communication with {{ $labels.To }} is taking {{ $value }}s on etcd instance {{ $labels.instance }}.
etcdHighNumberOfFailedProposals | warning | etcd cluster “{{ $labels.job }}”: {{ $value }} proposal failures within the last hour on etcd instance {{ $labels.instance }}.
etcdHighFsyncDurations | warning | etcd cluster “{{ $labels.job }}”: 99th percentile fsync durations are {{ $value }}s on etcd instance {{ $labels.instance }}.
etcdHighCommitDurations | warning | etcd cluster “{{ $labels.job }}”: 99th percentile commit durations are {{ $value }}s on etcd instance {{ $labels.instance }}.
etcdHighNumberOfFailedHTTPRequests | warning | {{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}.
etcdHighNumberOfFailedHTTPRequests | critical | {{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}.
etcdHTTPRequestsSlow | warning | etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow.
TargetDown | warning | One or more targets are unreachable.
Watchdog | none | An alert that should always be firing to certify that Alertmanager is working properly.
KubeAPIErrorBudgetBurn | critical | The API server is burning too much error budget.
KubeAPIErrorBudgetBurn | critical | The API server is burning too much error budget.
KubeAPIErrorBudgetBurn | warning | The API server is burning too much error budget.
KubeAPIErrorBudgetBurn | warning | The API server is burning too much error budget.
KubeStateMetricsListErrors | critical | kube-state-metrics is experiencing errors in list operations.
KubeStateMetricsWatchErrors | critical | kube-state-metrics is experiencing errors in watch operations.
KubeStateMetricsShardingMismatch | critical | kube-state-metrics sharding is misconfigured.
KubeStateMetricsShardsMissing | critical | kube-state-metrics shards are missing.
KubePodCrashLooping | warning | Pod is crash looping.
KubePodNotReady | warning | Pod has been in a non-ready state for more than 15 minutes.
KubeDeploymentGenerationMismatch | warning | Deployment generation mismatch due to possible roll-back.
KubeDeploymentReplicasMismatch | warning | Deployment has not matched the expected number of replicas.
KubeStatefulSetReplicasMismatch | warning | StatefulSet has not matched the expected number of replicas.
KubeStatefulSetGenerationMismatch | warning | StatefulSet generation mismatch due to possible roll-back.
KubeStatefulSetUpdateNotRolledOut | warning | StatefulSet update has not been rolled out.
KubeDaemonSetRolloutStuck | warning | DaemonSet rollout is stuck.
KubeContainerWaiting | warning | Pod container waiting longer than 1 hour.
KubeDaemonSetNotScheduled | warning | DaemonSet pods are not scheduled.
KubeDaemonSetMisScheduled | warning | DaemonSet pods are misscheduled.
KubeJobCompletion | warning | Job did not complete in time.
KubeJobFailed | warning | Job failed to complete.
KubeHpaReplicasMismatch | warning | HPA has not matched the desired number of replicas.
KubeHpaMaxedOut | warning | HPA is running at max replicas.
KubeCPUOvercommit | warning | Cluster has overcommitted CPU resource requests.
KubeMemoryOvercommit | warning | Cluster has overcommitted memory resource requests.
KubeCPUQuotaOvercommit | warning | Cluster has overcommitted CPU resource requests.
KubeMemoryQuotaOvercommit | warning | Cluster has overcommitted memory resource requests.
KubeQuotaAlmostFull | info | Namespace quota is going to be full.
KubeQuotaFullyUsed | info | Namespace quota is fully used.
KubeQuotaExceeded | warning | Namespace quota has exceeded the limits.
CPUThrottlingHigh | info | Processes experience elevated CPU throttling.
KubePersistentVolumeFillingUp | critical | PersistentVolume is filling up.
KubePersistentVolumeFillingUp | warning | PersistentVolume is filling up.
KubePersistentVolumeErrors | critical | PersistentVolume is having issues with provisioning.
KubeClientCertificateExpiration | warning | Client certificate is about to expire.
KubeClientCertificateExpiration | critical | Client certificate is about to expire.
AggregatedAPIErrors | warning | An aggregated API has reported errors.
AggregatedAPIDown | warning | An aggregated API is down.
KubeAPIDown | critical | Target disappeared from Prometheus target discovery.
KubeAPITerminatedRequests | warning | The apiserver has terminated {{ $value | humanizePercentage }} of its incoming requests.
KubeControllerManagerDown | critical | Target disappeared from Prometheus target discovery.
KubeNodeNotReady | warning | Node is not ready.
KubeNodeUnreachable | warning | Node is unreachable.
KubeletTooManyPods | warning | Kubelet is running at capacity.
KubeNodeReadinessFlapping | warning | Node readiness status is flapping.
KubeletPlegDurationHigh | warning | Kubelet Pod Lifecycle Event Generator is taking too long to relist.
KubeletPodStartUpLatencyHigh | warning | Kubelet Pod startup latency is too high.
KubeletClientCertificateExpiration | warning | Kubelet client certificate is about to expire.
KubeletClientCertificateExpiration | critical | Kubelet client certificate is about to expire.
KubeletServerCertificateExpiration | warning | Kubelet server certificate is about to expire.
KubeletServerCertificateExpiration | critical | Kubelet server certificate is about to expire.
KubeletClientCertificateRenewalErrors | warning | Kubelet has failed to renew its client certificate.
KubeletServerCertificateRenewalErrors | warning | Kubelet has failed to renew its server certificate.
KubeletDown | critical | Target disappeared from Prometheus target discovery.
KubeSchedulerDown | critical | Target disappeared from Prometheus target discovery.
KubeVersionMismatch | warning | Different semantic versions of Kubernetes components running.
KubeClientErrors | warning | Kubernetes API server client is experiencing errors.
NodeFilesystemSpaceFillingUp | warning | Filesystem is predicted to run out of space within the next 24 hours.
NodeFilesystemSpaceFillingUp | critical | Filesystem is predicted to run out of space within the next 4 hours.
NodeFilesystemAlmostOutOfSpace | warning | Filesystem has less than 20% space left.
NodeFilesystemAlmostOutOfSpace | critical | Filesystem has less than 12% space left.
NodeFilesystemFilesFillingUp | warning | Filesystem is predicted to run out of inodes within the next 24 hours.
NodeFilesystemFilesFillingUp | critical | Filesystem is predicted to run out of inodes within the next 4 hours.
NodeFilesystemAlmostOutOfFiles | warning | Filesystem has less than 15% inodes left.
NodeFilesystemAlmostOutOfFiles | critical | Filesystem has less than 8% inodes left.
NodeNetworkReceiveErrs | warning | Network interface is reporting many receive errors.
NodeNetworkTransmitErrs | warning | Network interface is reporting many transmit errors.
NodeHighNumberConntrackEntriesUsed | warning | Number of conntrack entries is getting close to the limit.
NodeClockSkewDetected | warning | Clock on {{ $labels.instance }} is out of sync by more than 300s. Ensure NTP is configured correctly on this host.
NodeClockNotSynchronising | warning | Clock on {{ $labels.instance }} is not synchronising. Ensure NTP is configured on this host.
NodeTextFileCollectorScrapeError | warning | Node Exporter text file collector failed to scrape.
NodeRAIDDegraded | critical | RAID array is degraded.
NodeRAIDDiskFailure | warning | Failed device in RAID array.
NodeNetworkInterfaceFlapping | warning | Network interface “{{ $labels.device }}” is changing its up status often on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}.
PrometheusOperatorListErrors | warning | Errors while performing list operations in controller.
PrometheusOperatorWatchErrors | warning | Errors while performing watch operations in controller.
PrometheusOperatorSyncFailed | warning | Last controller reconciliation failed.
PrometheusOperatorReconcileErrors | warning | Errors while reconciling controller.
PrometheusOperatorNodeLookupErrors | warning | Errors while reconciling Prometheus.
PrometheusOperatorNotReady | warning | Prometheus operator is not ready.
PrometheusOperatorRejectedResources | warning | Resources rejected by Prometheus operator.
PrometheusBadConfig | critical | Failed Prometheus configuration reload.
PrometheusNotificationQueueRunningFull | warning | Prometheus alert notification queue predicted to run full in less than 30m.
PrometheusErrorSendingAlertsToSomeAlertmanagers | warning | Prometheus has encountered more than 1% errors sending alerts to a specific Alertmanager.
PrometheusNotConnectedToAlertmanagers | warning | Prometheus is not connected to any Alertmanagers.
PrometheusTSDBReloadsFailing | warning | Prometheus has issues reloading blocks from disk.
PrometheusTSDBCompactionsFailing | warning | Prometheus has issues compacting blocks.
PrometheusNotIngestingSamples | warning | Prometheus is not ingesting samples.
PrometheusDuplicateTimestamps | warning | Prometheus is dropping samples with duplicate timestamps.
PrometheusOutOfOrderTimestamps | warning | Prometheus drops samples with out-of-order timestamps.
PrometheusRemoteStorageFailures | critical | Prometheus fails to send samples to remote storage.
PrometheusRemoteWriteBehind | critical | Prometheus remote write is behind.
PrometheusRemoteWriteDesiredShards | warning | Prometheus remote write desired shards calculation wants to run more than configured max shards.
PrometheusRuleFailures | critical | Prometheus is failing rule evaluations.
PrometheusMissingRuleEvaluations | warning | Prometheus is missing rule evaluations due to slow rule group evaluation.
PrometheusTargetLimitHit | warning | Prometheus has dropped targets because some scrape configs have exceeded the targets limit.
PrometheusLabelLimitHit | warning | Prometheus has dropped targets because some scrape configs have exceeded the labels limit.
PrometheusErrorSendingAlertsToAnyAlertmanager | critical | Prometheus encounters more than 3% errors sending alerts to any Alertmanager.
Snapshot Prometheus Database
To snapshot the database, you must first enable the Prometheus admin API.
To generate a snapshot, use the sosreport utility with the following options:
root@host # sosreport --batch --build -o metalk8s -kmetalk8s.prometheus-snapshot=True
The name of the generated archive is printed in the console output, and the Prometheus snapshot can be found under the prometheus_snapshot directory.
Warning
You must ensure you have sufficient disk space (at least the size of the Prometheus volume) under /var/tmp, or change the archive destination with the --tmp-dir=<new_dest> option.
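For example, to write the archive to a location with more free space (the destination path below is illustrative):
root@host # sosreport --batch --build -o metalk8s -kmetalk8s.prometheus-snapshot=True --tmp-dir=/mnt/big-disk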