Prometheus Getting Started Tutorial: Service Discovery (Part 4)


Overview

All of Prometheus's scrape targets must be declared in its configuration file, which normally means editing the file and telling Prometheus to reload it every time something changes. Service discovery exists to solve exactly this problem: Prometheus can actively detect services being added, removed, or updated in the system and automatically place the resulting targets into the scrape queue.

scrape_configs

Defines the collection rules.

Each scrape_config object corresponds to one scrape job, and each job can cover multiple instances, i.e. the targets in the configuration file. (Advanced setups may change this one-to-many mapping.)
Targets may be statically configured via the <static_configs> parameter or dynamically discovered using one of the supported service-discovery mechanisms.
Additionally, <relabel_configs> allows advanced modifications to any target and its labels before scraping.
The <job_name> must be unique across all scrape configurations.
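
For orientation, a minimal job definition might look like the sketch below (the target address and interval are illustrative); the field-by-field reference follows.

scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ['192.168.1.5:9100']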

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
#
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]

# honor_timestamps controls whether Prometheus respects the timestamps present
# in scraped data.
#
# If honor_timestamps is set to "true", the timestamps of the metrics exposed
# by the target will be used.
#
# If honor_timestamps is set to "false", the timestamps of the metrics exposed
# by the target will be ignored.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]

###############################################
###############################################
# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]
# CA certificate to validate the API server certificate with.
[ ca_file: <filename> ]
# Certificate and key files for client certificate authentication to the server.
[ cert_file: <filename> ]
[ key_file: <filename> ]
# ServerName extension to indicate the name of the server.
# https://tools.ietf.org/html/rfc4366#section-3.1
[ server_name: <string> ]
# Disable validation of the server certificate.
[ insecure_skip_verify: <boolean> ]
###############################################
###############################################

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of labeled statically configured targets for this job.
static_configs:
  [ - <static_config> ... ]

# List of target relabel configurations. Relabels discovered targets
# (and their labels) before they are scraped.
relabel_configs:
  [ - <relabel_config> ... ]

# List of metric relabel configurations. Applied after the scrape but before
# the samples are stored; useful for deciding which metrics to keep or drop.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabelling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]

# Other service discovery mechanisms are configured the same way, e.g.:
azure_sd_configs:
consul_sd_configs:
dns_sd_configs:
ec2_sd_configs:
openstack_sd_configs:
file_sd_configs:
gce_sd_configs:
marathon_sd_configs:
nerve_sd_configs:
serverset_sd_configs:
triton_sd_configs:

1 file_sd_configs

File-based service discovery

Service discovery from JSON-format files:

[root@localhost ~]# cd /usr/local/prometheus/
[root@localhost prometheus]# mkdir targets
[root@localhost prometheus]# cat targets/dev_node.json 
[
  {
    "targets": [ "192.168.1.5:9090","127.0.0.1:9090" ],
    "labels": {
      "env": "dev_webgame"
    }
  }
]
[root@localhost prometheus]# cat prometheus.yml
  - job_name: 'node_service_discovery'
    file_sd_configs:
    - files: 
      - targets/*.json
      refresh_interval: 60m
[root@localhost prometheus]# systemctl restart prometheus

Configuration notes:
file_sd_configs: selects Prometheus's file-based service discovery for this job.
files: the target files to load; here a targets directory next to the prometheus binary, from which every .json file is loaded automatically. A single JSON file can of course be named explicitly instead.
refresh_interval: 60m: re-reads the files every 60 minutes (not seconds; m means minutes). Changes to the files are also detected via disk watches and applied immediately, so this interval is only a fallback.
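
Before restarting, the configuration syntax can be validated with promtool, which ships in the Prometheus tarball:

[root@localhost prometheus]# ./promtool check config prometheus.yml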

Service discovery from YAML-format files

[root@localhost prometheus]# cat targets/dev_node.yaml 
- targets:
  - "192.168.1.30:9100"

[root@localhost prometheus]# cat prometheus.yml
  - job_name: 'node_service_discovery'
    file_sd_configs:
    - files:
      - targets/*.json
      refresh_interval: 60m
    - files:
      - targets/*.yaml
      refresh_interval: 60m
[root@localhost prometheus]# systemctl restart prometheus
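
After the restart, you can confirm that the file-based targets were picked up, either on the web UI under Status → Targets or through the HTTP API:

[root@localhost prometheus]# curl -s http://localhost:9090/api/v1/targets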

2 consul_sd_configs

Consul-based service discovery

2.1 Install Consul

[root@localhost opt]# ll consul_1.7.3_linux_amd64.zip 
-rw-r--r--. 1 root root 39717645 May 19 03:50 consul_1.7.3_linux_amd64.zip
[root@localhost opt]# mkdir /usr/local/consul
[root@localhost opt]# unzip consul_1.7.3_linux_amd64.zip -d /usr/local/consul/

2.2 Start Consul

Consul is used through its agent, a daemon that runs on every member of a Consul cluster and takes care of maintaining cluster membership, registering services, answering queries, running health checks, and so on. The agent command is the core of Consul and can run in either server or client mode:

[root@localhost ~]# cd /usr/local/consul/
[root@localhost consul]# ./consul agent -dev
==> Starting Consul agent...
           Version: 'v1.7.3'
           Node ID: '950de751-f475-f7b4-cc8e-624ef85be6e3'
         Node name: 'localhost.localdomain'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

2.3 Service registration and discovery

Consul offers two ways to register a service:

one is registration via configuration file, i.e. the service is defined in a config file that the agent loads;

the other is registration via the HTTP API, i.e. after startup the service registers itself by calling the API (a sketch of this method follows the config-file example below).

Method one: register the locally running node_exporter with Consul through a service definition file.

[root@localhost consul]# mkdir -p /usr/local/consul/consul.d
[root@localhost consul]# cd /usr/local/consul/consul.d/
[root@localhost consul.d]# cat node_exporter.json 
{
  "service": {
    "id": "node_exporter",
    "name": "node_exporter",
    "tags": [
      "dev_games"
    ],
    "address": "127.0.0.1",
    "port": 9100
  }
}

Configuration notes:
id: the service ID, optional. If omitted, it defaults to the value of name.
name: the service name, required. All services on a given node must have unique IDs.
tags: custom, optional tags for the service, e.g. for distinguishing primary from secondary nodes.
address: a service-specific IP address. By default the agent's IP address is used, so this field can be omitted. Think of it as the IP under which the service is registered in Consul; service discovery returns this address.
port: likewise, the port under which the service is registered in Consul; service discovery returns this port together with address.

After creating the definition file for the first time, Consul has to be restarted to load it. Stop the dev-mode consul agent running in the terminal with Ctrl+C, then start it again pointing at the config directory:

[root@localhost consul]# ./consul agent -dev -config-dir=/usr/local/consul/consul.d
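
Method two, for comparison, registers the same service through the agent's HTTP API instead of a file. A minimal sketch mirroring the JSON definition above:

# Register the service (PUT /v1/agent/service/register).
curl -X PUT http://127.0.0.1:8500/v1/agent/service/register \
  -H 'Content-Type: application/json' \
  -d '{"ID": "node_exporter", "Name": "node_exporter",
       "Tags": ["dev_games"], "Address": "127.0.0.1", "Port": 9100}'

# Deregister it again when no longer needed.
curl -X PUT http://127.0.0.1:8500/v1/agent/service/deregister/node_exporter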

2.4 Service discovery via the HTTP API and DNS

Query the registered service on the Consul host through the HTTP API:

[root@localhost ~]# curl http://localhost:8500/v1/catalog/service/node_exporter
[
    {
        "ID": "2d4c5e66-f00b-2ac9-0391-146993c69f0b",
        "Node": "localhost.localdomain",
        "Address": "127.0.0.1",
        "Datacenter": "dc1",
        "TaggedAddresses": {
            "lan": "127.0.0.1",
            "lan_ipv4": "127.0.0.1",
            "wan": "127.0.0.1",
            "wan_ipv4": "127.0.0.1"
        },
        "NodeMeta": {
            "consul-network-segment": ""
        },
        "ServiceKind": "",
        "ServiceID": "node_exporter",
        "ServiceName": "node_exporter",
        "ServiceTags": [
            "dev_games"
        ],
        "ServiceAddress": "192.168.1.20",
        "ServiceTaggedAddresses": {
            "lan_ipv4": {
                "Address": "192.168.1.20",
                "Port": 9100
            },
            "wan_ipv4": {
                "Address": "192.168.1.20",
                "Port": 9100
            }
        },
        "ServiceWeights": {
            "Passing": 1,
            "Warning": 1
        },
        "ServiceMeta": {},
        "ServicePort": 9100,
        "ServiceEnableTagOverride": false,
        "ServiceProxy": {
            "MeshGateway": {},
            "Expose": {}
        },
        "ServiceConnect": {},
        "CreateIndex": 12,
        "ModifyIndex": 12
    }
]

Use Consul's built-in DNS service to look up node information for the current cluster (install bind-utils if dig is missing):

[root@localhost consul.d]# dig @127.0.0.1 -p 8600 node_exporter.service.consul

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-16.P2.el7_8.3 <<>> @127.0.0.1 -p 8600 node_exporter.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11982
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node_exporter.service.consul.  IN  A

;; ANSWER SECTION:
node_exporter.service.consul. 0 IN  A   127.0.0.1

;; Query time: 33 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 19 20:36:22 EDT 2020
;; MSG SIZE  rcvd: 73

2.5 Integrating with Prometheus

[root@localhost prometheus]# cat prometheus.yml
  - job_name: 'consul_sd_node_exporter'
    scheme: http
    consul_sd_configs:
      - server: 127.0.0.1:8500
        services: ['node_exporter']

Configuration notes:
consul_sd_configs: selects Consul-based service discovery for this job.

  • server: the Consul server address. Here Consul and Prometheus run on the same host, so 127.0.0.1 is used.
  • services: an array of service names to discover. Several names can be listed, e.g. services: ['node_exporter','mysqld_exporter']; if omitted, all services registered in Consul are discovered.
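
Targets discovered through Consul carry __meta_consul_* labels, which relabel_configs can map onto regular labels before the scrape. A sketch extending the job above (the service and dc target-label names are illustrative):

  - job_name: 'consul_sd_node_exporter'
    consul_sd_configs:
      - server: 127.0.0.1:8500
        services: ['node_exporter']
    relabel_configs:
      # Copy the Consul service name into a "service" label.
      - source_labels: ['__meta_consul_service']
        target_label: 'service'
      # Copy the Consul datacenter into a "dc" label.
      - source_labels: ['__meta_consul_dc']
        target_label: 'dc'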

3 dns_sd_configs

DNS service discovery

3.1 Configure hosts resolution (use real DNS records instead if a DNS server is available)

[root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.136 prometheus.tcp.com

3.2 Configure Prometheus

[root@localhost ~]# vi /usr/local/prometheus/prometheus.yml
  - job_name: 'dns_node_exporter'
    static_configs:
    - targets: ['prometheus.tcp.com:9100']
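
Note that the job above resolves prometheus.tcp.com through ordinary host-name resolution inside static_configs. A genuine dns_sd_configs job instead queries DNS records (A, AAAA, or SRV) directly, so an /etc/hosts entry alone is typically not enough. A minimal sketch, assuming an A record exists for the name:

  - job_name: 'dns_sd_node_exporter'
    dns_sd_configs:
      - names: ['prometheus.tcp.com']
        type: 'A'          # 'SRV' (the default) and 'AAAA' are also supported
        port: 9100         # required for A/AAAA lookups
        refresh_interval: 30s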

4 static_configs

  scrape_configs:
    - job_name: prometheus
      static_configs:
      - targets:
        - localhost:9090
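
A static_config can also attach a labels map that is applied to every target in the group; a minimal sketch (the label name and value are illustrative):

  scrape_configs:
    - job_name: 'node'
      static_configs:
      - targets:
        - 192.168.1.5:9100
        - 192.168.1.30:9100
        labels:
          env: 'dev'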