A long time ago, observability as we know it didn't exist; all we had was monitoring. Back then, monitoring was a bunch of people looking at screens that displayed dashboards. The dashboards themselves consisted of metrics, and only system metrics: mainly CPU, memory, and disk usage. For this reason, we'll start with metrics.
version: "3" services: fake-metrics: build: ./fake-metrics-generator #1 collector: image: otel/opentelemetry-collector:0.87.0 #2 environment: #3 - METRICS_HOST=fake-metrics - METRICS_PORT=5000 volumes: - ./config/collector/config.yml:/etc/otelcol/config.yaml:ro #4
receivers:                                                            #1
  prometheus:                                                         #2
    config:
      scrape_configs:                                                 #3
        - job_name: fake-metrics                                      #4
          scrape_interval: 3s
          static_configs:
            - targets: [ "${env:METRICS_HOST}:${env:METRICS_PORT}" ]
exporters:                                                            #5
  logging:                                                            #6
    loglevel: debug
service:
  pipelines:                                                          #7
    metrics:                                                          #8
      receivers: [ "prometheus" ]                                     #9
      exporters: [ "logging" ]                                        #9
The predefined prometheus receiver scrapes the data and forwards it to the logging exporter, which prints it out:
2024-11-11 08:28:54 otel-collector-collector-1 | StartTimestamp: 1971-01-01 00:00:00 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Timestamp: 2024-11-11 07:28:54.14 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Value: 83.090000
2024-11-11 08:28:54 otel-collector-collector-1 | NumberDataPoints #1
2024-11-11 08:28:54 otel-collector-collector-1 | Data point attributes:
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__embrace_world_class_systems: Str(concept)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__exploit_magnetic_applications: Str(concept)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__facilitate_wireless_architectures: Str(extranet)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__grow_magnetic_communities: Str(challenge)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__reinvent_revolutionary_applications: Str(support)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__strategize_strategic_initiatives: Str(internet_solution)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__target_customized_eyeballs: Str(concept)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__transform_turn_key_technologies: Str(framework)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__whiteboard_innovative_partnerships: Str(matrices)
2024-11-11 08:28:54 otel-collector-collector-1 | StartTimestamp: 1971-01-01 00:00:00 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Timestamp: 2024-11-11 07:28:54.14 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Value: 53.090000
2024-11-11 08:28:54 otel-collector-collector-1 | NumberDataPoints #2
2024-11-11 08:28:54 otel-collector-collector-1 | Data point attributes:
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__expedite_distributed_partnerships: Str(approach)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__facilitate_wireless_architectures: Str(graphical_user_interface)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__grow_magnetic_communities: Str(policy)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__reinvent_revolutionary_applications: Str(algorithm)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__transform_turn_key_technologies: Str(framework)
2024-11-11 08:28:54 otel-collector-collector-1 | StartTimestamp: 1971-01-01 00:00:00 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Timestamp: 2024-11-11 07:28:54.14 +0000 UTC
2024-11-11 08:28:54 otel-collector-collector-1 | Value: 16.440000
2024-11-11 08:28:54 otel-collector-collector-1 | NumberDataPoints #3
2024-11-11 08:28:54 otel-collector-collector-1 | Data point attributes:
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__exploit_magnetic_applications: Str(concept)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__grow_magnetic_communities: Str(graphical_user_interface)
2024-11-11 08:28:54 otel-collector-collector-1 |      -> fake__target_customized_eyeballs: Str(extranet)
exporters:
  prometheus:                              #1
    endpoint: ":${env:PROMETHEUS_PORT}"    #2
service:
  pipelines:
    metrics:
      receivers: [ "prometheus" ]
      exporters: [ "prometheus" ]          #3
We can also configure more than one exporter in the same pipeline, e.g., keep the logging exporter alongside the prometheus exporter:
exporters:
  prometheus:                              #1
    endpoint: ":${env:PROMETHEUS_PORT}"
  logging:                                 #2
    loglevel: debug
service:
  pipelines:
    metrics:
      receivers: [ "prometheus" ]
      exporters: [ "prometheus", "logging" ]   #3
Note that receivers and exporters are specified by their type, and each identifier must be unique. To comply with the latter requirement, we can append a qualifier to distinguish between them, i.e., prometheus/foo and prometheus/bar.
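As a hedged illustration (the foo and bar qualifiers and the port numbers are placeholders, not part of the article's setup), two exporters of the same type could coexist like this:

exporters:
  prometheus/foo:                          # type "prometheus", qualifier "foo"
    endpoint: ":8889"
  prometheus/bar:                          # same type, different qualifier
    endpoint: ":8890"
service:
  pipelines:
    metrics:
      receivers: [ "prometheus" ]
      exporters: [ "prometheus/foo", "prometheus/bar" ]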
You declare data processors in the processors section of the configuration file. The Collector executes them in the order in which they are declared (see the short ordering sketch below). Let's implement the transformation described above.
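First, a minimal ordering sketch, not part of the article's setup: processors run left to right as listed in the pipeline's processors array, so here memory_limiter executes before batch.

processors:
  memory_limiter:                          # guard against out-of-memory first
    check_interval: 1s
    limit_mib: 400
  batch:                                   # then group data before exporting
service:
  pipelines:
    metrics:
      receivers: [ "prometheus" ]
      processors: [ "memory_limiter", "batch" ]   # executed in this order
      exporters: [ "prometheus" ]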
collector:
  image: otel/opentelemetry-collector-contrib:0.87.0                       #1
  environment:
    - METRICS_HOST=fake-metrics
    - METRICS_PORT=5000
    - PROMETHEUS_PORT=8889
  volumes:
    - ./config/collector/config.yml:/etc/otelcol-contrib/config.yaml:ro    #2
Note the contrib flavor of the Collector image: the metricstransform processor is only part of the contrib distribution, and the configuration path inside the container changes accordingly.
processors:
  metricstransform:                        #1
    transforms:                            #2
      - include: ^fake_(.*)$               #3
        match_type: regexp                 #3
        action: update
        operations:                        #4
          - action: add_label              #5
            new_label: origin
            new_value: fake
      - include: ^fake_(.*)$
        match_type: regexp
        action: update                     #6
        new_name: $${1}                    #6-7
      # Do the same with metrics generated by NodeJS
Note the $${x} syntax: the Collector expands ${...} as environment-variable references, so the extra dollar sign escapes the expansion and passes a literal ${1}, the first regexp capture group, to the processor.
service:
  pipelines:
    metrics:
      receivers: [ "prometheus" ]
      processors: [ "metricstransform" ]
      exporters: [ "prometheus" ]
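To make the effect concrete, here is what a hypothetical metric could look like before and after the two transforms; the metric name fake_uptime_ratio is invented for illustration, while the label comes from the sample output above.

# Before the metricstransform processor:
#   fake_uptime_ratio{fake__grow_magnetic_communities="policy"} 53.09
# After add_label and the regexp rename, where $${1} resolves to ${1},
# the first capture group of ^fake_(.*)$:
#   uptime_ratio{fake__grow_magnetic_communities="policy",origin="fake"} 53.09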
A connector is both a receiver and an exporter and connects two pipelines. The example in the documentation receives the number of spans from a traces pipeline and exports the count as a metric. I tried to achieve the same with 500 errors; spoiler: it didn't work as expected.
receivers:
  filelog:
    include: [ "/var/logs/generated.log" ]
connectors:
  count:
    requests.errors:
      description: Number of 500 errors
      condition: [ "status == 500 " ]
service:
  pipelines:
    logs:
      receivers: [ "filelog" ]
      exporters: [ "count" ]
    metrics:
      receivers: [ "prometheus", "count" ]
The resulting metric is named log_record_count_total, but its value stays at 1.
receivers:
  filelog:
    include: [ "/var/logs/generated.log" ]
    operators:
      - type: json_parser                       #1
        timestamp:                              #2
          parse_from: attributes.datetime       #3
          layout: "%d/%b/%Y:%H:%M:%S %z"        #4
        severity:                               #2
          parse_from: attributes.status         #3
          mapping:                              #5
            error: 5xx                          #6
            warn: 4xx
            info: 3xx
            debug: 2xx
      - id: remove_body                         #7
        type: remove
        field: body
      - id: remove_datetime                     #7
        type: remove
        field: attributes.datetime
      - id: remove_status                       #7
        type: remove
        field: attributes.status
For example, error covers HTTP statuses in the 501-599 range: the operator has a special interpretation of the value 5xx (and similar values) for HTTP statuses.
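For reference, here is a hypothetical log line that this configuration would parse; the field values are invented, but the datetime matches the %d/%b/%Y:%H:%M:%S %z layout and the status feeds the severity mapping:

{ "datetime": "12/Nov/2023:08:28:54 +0000", "status": 503, "message": "GET /health HTTP/1.1" }

The json_parser turns datetime into the record timestamp, maps the 503 status to the error severity, and the remove operators then drop the now-redundant body and attributes.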
exporters:
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
service:
  telemetry:
    logs:
service:
  pipelines:
    logs:
      receivers: [ "filelog" ]
      exporters: [ "loki" ]
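The article doesn't show the Loki side of the Compose file; here is a minimal sketch of what the service could look like under services (the image tag is an assumption, not taken from the article):

  loki:
    image: grafana/loki:2.9.2     # assumed tag, adjust as needed
    ports:
      - "3100:3100"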
To go further:
Originally published on November 12th, 2023 at