In this workshop, we'll take a look at simple deployment options for bringing up a basic Kafka cluster. We'll also dive into monitoring solutions for a Kafka cluster and discuss important metrics, alert configuration, and notifications.
The breakdown of the workshop will be as follows:
We'll look at a Dockerized solution and create a Dockerfile that builds a Kafka container, then explore the simple dashboard provided by Lenses. We'll also walk through sample Terraform and Ansible scripts that deploy a Kafka cluster as EC2 instances in AWS. Finally, we'll configure Kafka and ZooKeeper to run as systemd services and look at an example Kafka-to-S3 sink connector, deployed in AWS, that writes data to S3 buckets.
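As a rough sketch of the Dockerized approach, a single-broker cluster can be described in a docker-compose file like the one below. The image names, ports, and environment variables are assumptions for illustration; the workshop's actual Dockerfile and compose setup may differ.

```yaml
# docker-compose.yml -- minimal single-broker Kafka + ZooKeeper sketch
version: "3"
services:
  zookeeper:
    image: zookeeper:3.8            # assumed image; any ZooKeeper 3.x image works
    ports:
      - "2181:2181"
  kafka:
    image: bitnami/kafka:3.4        # assumed image
    ports:
      - "9092:9092"
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      # Advertise localhost so clients on the host machine can connect
      KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      ALLOW_PLAINTEXT_LISTENER: "yes"   # dev-only: no TLS/SASL
    depends_on:
      - zookeeper
```

A plaintext listener is acceptable for a local workshop environment but not for production, where TLS and authentication would be layered on top.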
For an overall monitoring solution, we'll start by looking at Kafka logging and discuss the different logs that come with a simple deployment. We'll look at broker logs, ZooKeeper logs, server logs, and Connect logs, and discuss how these logs can be made available to support teams. Additional topics covered in this section include:
- Monitoring each component of a Kafka cluster: services, applications, data flows, brokers, ZooKeeper, Schema Registry, Connect, the network, and system resources (CPU, disk space, memory)
- JMX Exporter and the Kafka metrics it reports
- Sample Kafka dashboards provided by Grafana
- Splunk Connect for Kafka - a sink connector that allows a Splunk software administrator to subscribe to a Kafka topic and stream the data to the Splunk HTTP Event Collector
- Elasticsearch Metricbeat Kafka module and how it integrates with Jolokia
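Broker metrics are typically exposed to Prometheus by attaching the JMX Exporter to the broker as a Java agent. The sketch below shows one way to wire this up; the file paths, port, and rule patterns are assumptions and would be adapted to the actual deployment.

```yaml
# /opt/jmx_exporter/kafka.yml -- minimal jmx_exporter rules (path is an assumption)
lowercaseOutputName: true
rules:
  # Expose broker metrics such as kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions
  - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
    name: kafka_server_$1_$2
```

The agent is then attached via `KAFKA_OPTS` (for example in the broker's systemd unit or environment file), after which metrics are scrapeable on the chosen port:

```
KAFKA_OPTS="-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent.jar=7071:/opt/jmx_exporter/kafka.yml"
```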
We'll look at configuring Kafka-related alerts covering in-sync replicas, partitions, broker connectivity, ZooKeeper connectivity, health status, latency, bandwidth, throughput, and consumer lag, as well as preventing message loss in production. Finally, we'll wrap up by discussing notification options including email, Slack, ServiceNow, and PagerDuty.
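Two of the alerts above can be sketched as Prometheus alerting rules. The metric names below assume jmx_exporter / kafka_exporter naming conventions and the lag threshold is an arbitrary example; both would be adjusted to the actual monitoring stack.

```yaml
# kafka-alerts.yml -- sketch of Prometheus alerting rules (metric names are assumptions)
groups:
  - name: kafka-alerts
    rules:
      - alert: UnderReplicatedPartitions
        # Any partition below its replication target risks data loss on broker failure
        expr: kafka_server_replicamanager_underreplicatedpartitions > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Broker {{ $labels.instance }} has under-replicated partitions"
      - alert: ConsumerLagHigh
        # Example threshold; tune per topic and consumer group
        expr: kafka_consumergroup_lag > 10000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Consumer group {{ $labels.consumergroup }} lag is high"
```

Alertmanager would then route these alerts to the notification channels discussed above (email, Slack, ServiceNow, PagerDuty).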