Kubernetes events record what is happening inside a cluster. You can list them with:
$ kubectl get events
15m Warning FailedCreate replicaset/ml-pipeline-visualizationserver-865c7865bc Error creating: pods "ml-pipeline-visualizationserver-865c7865bc-" is forbidden: error looking up service account default/default-editor: serviceaccount "default-editor" not found
However useful this information is, it is only temporary: the retention period generally ranges from 5 minutes to 30 days. You may want to keep this precious information for auditing purposes or later diagnosis in more durable and efficient storage like Apache Kafka. You can then react to certain events by having a dedicated tool or your own applications subscribe to a Kafka topic.
In this article, I will show you how to build such a pipeline for processing and storing Kubernetes cluster events. The first building block is the Kafka cluster itself. With the Strimzi operator installed, it can be declared as a Kafka custom resource:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: kube-events
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    config:
      default.replication.factor: 3
      log.message.format.version: "2.6"
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    listeners:
      - name: plain
        port: 9092
        tls: false
        type: internal
      - name: tls
        port: 9093
        tls: true
        type: internal
    replicas: 3
    storage:
      type: jbod
      volumes:
        - deleteClaim: false
          id: 0
          size: 10Gi
          type: persistent-claim
    version: 2.6.0
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: false
      size: 10Gi
      type: persistent-claim
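Assuming the Strimzi operator is installed and watching the kube-events namespace, and that the manifest above is saved as kafka.yaml (both the filename and the namespace are choices made for this article), deploying the cluster and waiting for it to become ready looks something like this:

kubectl create namespace kube-events
kubectl -n kube-events apply -f kafka.yaml
kubectl -n kube-events wait kafka/kube-events --for=condition=Ready --timeout=300s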
Next, create the topic that the events will be written to. Note the strimzi.io/cluster label, which the Strimzi topic operator uses to match the topic to its Kafka cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: cluster-events
  labels:
    strimzi.io/cluster: kube-events
spec:
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
  partitions: 1
  replicas: 1
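Apply it and check that the topic was created (topic.yaml is an assumed filename):

kubectl -n kube-events apply -f topic.yaml
kubectl -n kube-events get kafkatopic cluster-events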
Shipping the events into Kafka is the job of eventrouter, a small service that watches the Kubernetes API and forwards every event to a configurable sink, here the topic we just created:

apiVersion: v1
data:
  config.json: |-
    {
      "sink": "kafka",
      "kafkaBrokers": "kube-events-kafka-bootstrap.kube-events.svc.cluster.local:9092",
      "kafkaTopic": "cluster-events"
    }
kind: ConfigMap
metadata:
  name: eventrouter-cm
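The eventrouter deployment itself only needs to mount this ConfigMap where the binary looks for its configuration. Here is a minimal sketch, assuming the image and the /etc/eventrouter mount path from the upstream repository; the ServiceAccount and the RBAC rules granting read access to events are left out:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventrouter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eventrouter
  template:
    metadata:
      labels:
        app: eventrouter
    spec:
      serviceAccountName: eventrouter
      containers:
        - name: eventrouter
          image: gcr.io/heptio-images/eventrouter:v0.3
          volumeMounts:
            - name: config
              mountPath: /etc/eventrouter
      volumes:
        - name: config
          configMap:
            name: eventrouter-cm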
Once eventrouter is running, events should start flowing into the topic. You can check with the console consumer shipped in the Kafka broker pods:

kubectl -n kube-events exec kube-events-kafka-0 -- bin/kafka-console-consumer.sh \
    --bootstrap-server kube-events-kafka-bootstrap:9092 \
    --topic cluster-events \
    --from-beginning
{"verb":"ADDED","event":{...}}
{"verb":"ADDED","event":{...}}
...
There’s no explicit need to create new topics beforehand if automatic topic creation is enabled on the brokers (it is by default, via auto.create.topics.enable), as Kafka will create them implicitly the first time a client produces to them. It’s a really cool feature.
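If you prefer to make that behavior explicit, or to disable it, the setting can be pinned in the Kafka resource's broker configuration, for example:

spec:
  kafka:
    config:
      auto.create.topics.enable: true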
The last piece is a small fanout application that consumes the cluster-events topic and republishes each event to a topic named after the namespace it originated from. Have a look at the full code here. There are of course more performant options, depending on the projected volume of events and the complexity of the fanout logic. For a more robust implementation, a consumer built on one of Kafka's higher-level consumer abstractions would be a great choice.

Deploy it:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: events-fanout
  name: events-fanout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: events-fanout
  template:
    metadata:
      labels:
        app: events-fanout
    spec:
      containers:
        - image: emmsys/events-fanout:latest
          name: events-fanout
          command: ["./events-fanout"]
          args:
            - -logLevel=info
          env:
            - name: ENDPOINT
              value: kube-events-kafka-bootstrap:9092
            - name: TOPIC
              value: cluster-events
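For reference, the heart of such a fanout fits in a few dozen lines. The sketch below is not the actual events-fanout code, just a minimal illustration using the segmentio/kafka-go client; it reads the ENDPOINT and TOPIC variables set in the Deployment above and extracts the namespace from the eventrouter payload shown earlier:

package main

import (
	"context"
	"encoding/json"
	"log"
	"os"

	"github.com/segmentio/kafka-go"
)

func main() {
	endpoint := os.Getenv("ENDPOINT") // e.g. kube-events-kafka-bootstrap:9092
	source := os.Getenv("TOPIC")      // e.g. cluster-events

	// Consume the source topic as part of a consumer group.
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{endpoint},
		Topic:   source,
		GroupID: "events-fanout",
	})
	defer reader.Close()

	// The writer has no fixed topic: each message carries its own,
	// and the brokers auto-create missing topics on first write.
	writer := &kafka.Writer{
		Addr:                   kafka.TCP(endpoint),
		AllowAutoTopicCreation: true,
	}
	defer writer.Close()

	for {
		msg, err := reader.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}

		// eventrouter wraps each Kubernetes event as {"verb": ..., "event": {...}};
		// only the involved object's namespace is needed here.
		var record struct {
			Event struct {
				InvolvedObject struct {
					Namespace string `json:"namespace"`
				} `json:"involvedObject"`
			} `json:"event"`
		}
		if err := json.Unmarshal(msg.Value, &record); err != nil {
			log.Printf("skipping malformed record: %v", err)
			continue
		}

		ns := record.Event.InvolvedObject.Namespace
		if ns == "" {
			continue // cluster-scoped events carry no namespace (assumption: skip them)
		}

		// Republish the untouched payload to the per-namespace topic.
		if err := writer.WriteMessages(context.Background(), kafka.Message{
			Topic: ns,
			Value: msg.Value,
		}); err != nil {
			log.Printf("failed to publish to topic %s: %v", ns, err)
		}
	}
}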
After a while, the auto-created topics show up as KafkaTopic resources, one per namespace that has produced events:

kubectl -n kube-events get kafkatopics.kafka.strimzi.io -o name
kafkatopic.kafka.strimzi.io/cluster-events
kafkatopic.kafka.strimzi.io/kube-system
kafkatopic.kafka.strimzi.io/default
kafkatopic.kafka.strimzi.io/kafka
kafkatopic.kafka.strimzi.io/kube-events
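Each namespace now has its own event stream, so reacting to, say, kube-system events is just a matter of subscribing to that topic:

kubectl -n kube-events exec kube-events-kafka-0 -- bin/kafka-console-consumer.sh \
    --bootstrap-server kube-events-kafka-bootstrap:9092 \
    --topic kube-system \
    --from-beginning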