Hands-on Kafka Connect: Source to Sink in S3, GCS & Beyond

Why take this course?
Course Headline: Master Kafka Connect with Hands-On Experience!
Dive deep into the world of Apache Kafka Connect with this comprehensive course. Designed for learners who aim to master Kafka Connect and its open-source connectors, it blends theoretical grounding with practical hands-on experience. Get ready to integrate Kafka with external systems, handle data flows with confidence, and understand the intricacies of the Kafka Connect architecture.
Course Description:
This course is entirely dedicated to Kafka Connect and its open-source connectors. Of the many connectors available, it focuses on two in depth: a sink connector (the S3 sink) and a source connector (the Debezium MySQL CDC source).
**Module 1: Introduction to Kafka Connect**
- Understanding Kafka Connect: Learn about the role of Kafka Connect in streamlining data flow between Kafka and different data sources/sinks.
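As a quick orientation: a running Connect worker exposes a REST API, by default on port 8083. The endpoints below are part of the standard Kafka Connect REST interface; the localhost address assumes a worker running on your own machine.

```bash
# List the connector plugins installed on this worker, then the connectors
# currently running. Assumes a Connect worker on localhost:8083 (the default).
curl -s http://localhost:8083/connector-plugins
curl -s http://localhost:8083/connectors
```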
**Module 2: S3 Sink Connector Deep Dive**
- What is an S3 Sink Connector? Learn how to install and configure the S3 sink connector in both standalone and distributed modes (a registration sketch appears after this list).
- Partitioner Class: Explore the available partitioners with practical examples (configuration variants are sketched after this list):
  - Default Partitioner
  - Time-Based Partitioner
  - Field Partitioner
- Integration with Schema Registry: Understand how to test schema evolution under BACKWARD compatibility mode (see the one-liner after this list).
- DLQ (Dead Letter Queue) Testing: Generate invalid records and watch Kafka Connect route them to a dead letter queue instead of failing the task (the DLQ keys are included in the sketch below).
- Automating S3 Sink Connector Setup: Learn how to bring up the sink connector with a single Docker Compose command (shown after this list).
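To make Module 2 concrete, here is a minimal sketch of registering an S3 sink connector over the REST API. The connector, storage, and format classes and the error-handling keys are standard Confluent S3 sink / Kafka Connect settings; the connector name, topic, bucket, and region are placeholders.

```bash
# Minimal S3 sink connector registration (name, topic, bucket, region are placeholders).
curl -s -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "s3.bucket.name": "my-connect-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "100",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "dlq-s3-sink",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "errors.deadletterqueue.context.headers.enable": "true"
  }
}'
```

With `errors.tolerance` set to `all`, records that fail conversion are routed to the DLQ topic instead of killing the task, which is exactly what the DLQ testing exercise verifies.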
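The three partitioners in the list map to `partitioner.class` values from Confluent's storage partitioner package. The sketch below shows the extra keys the time-based and field partitioners need; all values are illustrative, and `PUT /connectors/{name}/config` replaces the full configuration, so the earlier keys are repeated.

```bash
# Default partitioner (objects land under <topic>/partition=<n>/):
#   "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner"
#
# Field partitioner (one S3 prefix per value of a record field; field name is a placeholder):
#   "partitioner.class": "io.confluent.connect.storage.partitioner.FieldPartitioner",
#   "partition.field.name": "country"
#
# Time-based partitioner needs a path format, duration, locale, and timezone:
curl -s -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/s3-sink/config -d '{
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "orders",
    "s3.bucket.name": "my-connect-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "100",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "path.format": "YYYY/MM/dd/HH",
    "partition.duration.ms": "3600000",
    "locale": "en-US",
    "timezone": "UTC",
    "timestamp.extractor": "Record"
  }'
```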
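For the Schema Registry and Docker Compose items, two one-liners cover the workflow: pinning a subject to BACKWARD compatibility uses the standard Schema Registry REST API, and bringing the stack up is a single Compose invocation. The subject name and registry address are placeholders, and the compose file is assumed to define the course's services.

```bash
# Pin the value subject to BACKWARD compatibility before testing schema evolution.
curl -s -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "BACKWARD"}' \
  http://localhost:8081/config/orders-value

# Bring up Kafka, Schema Registry, Connect, and the S3 sink in one command,
# assuming a docker-compose.yml that defines those services.
docker compose up -d
```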
**Module 3: Kafka Connect Cluster Setup**
- Provisioning Machines: Set up two AWS machines for your Kafka Connect cluster (a shared worker configuration is sketched after this list).
- Cluster Testing: Test load balancing and fault tolerance within the cluster.
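A distributed Connect cluster is just two or more workers started with the same `group.id` and the same three internal storage topics. Below is a minimal sketch of the worker file both machines would share, followed by the status check used when testing failover; hostnames and topic names are placeholders, and replication factors of 1 suit only a single-broker test setup.

```bash
# Shared distributed-worker config for both AWS machines (values are placeholders).
cat > connect-distributed.properties <<'EOF'
bootstrap.servers=broker1:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
rest.port=8083
EOF

# See which worker each task is running on; stop one worker and re-run this
# to watch the tasks rebalance onto the surviving machine.
curl -s http://machine1:8083/connectors/s3-sink/status
```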
**Module 4: Debezium MySQL CDC Source Connector Mastery**
- Understanding Debezium CDC: Learn how Debezium implements change data capture (CDC) internally.
- Starting the Debezium MySQL Connector: Configure and run the connector in distributed mode using Docker commands (a registration sketch appears after this list).
- Testing with DML and DDL Statements: Observe the change events produced by INSERT, UPDATE, and DELETE statements, and how DDL changes land in the schema history Kafka topic (sample statements follow this list).
- Integration with Schema Registry: Test your setup by running DDL & DML statements after integrating with Schema Registry.
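For Module 4, here is a registration sketch for the Debezium MySQL connector using Debezium 1.x-style property names (2.x renames some of these, e.g. `database.server.name` becomes `topic.prefix`). Hostnames, credentials, and database names are placeholders.

```bash
# Register the Debezium MySQL CDC source connector (Debezium 1.x-style keys;
# all hostnames, credentials, and names below are placeholders).
curl -s -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "mysql-cdc",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-password",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "broker1:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}'
```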
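To exercise the connector, run a few DML statements and one DDL statement against the captured database and watch the resulting events. The table, topic, and host names continue the placeholders above.

```bash
# DML produces create/update/delete change events; the ALTER TABLE is recorded
# in the schema history topic rather than the data topic (names are placeholders).
mysql -h mysql-host -u debezium -p inventory -e "
  INSERT INTO customers (id, name) VALUES (1001, 'Alice');
  UPDATE customers SET name = 'Alice B.' WHERE id = 1001;
  DELETE FROM customers WHERE id = 1001;
  ALTER TABLE customers ADD COLUMN email VARCHAR(255);
"

# The default data topic name is <server.name>.<database>.<table>:
kafka-console-consumer --bootstrap-server broker1:9092 \
  --topic dbserver1.inventory.customers --from-beginning
```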
By the end of this course, you will have a solid understanding of Kafka Connect's architecture and its application in real-world scenarios. You'll be well-equipped to implement Kafka connectors for various systems, handle data integration tasks, and optimize your Kafka cluster's performance.
Join us on this journey to become a Kafka Connect expert and unlock the full potential of your data infrastructure! Enroll now and start your learning path with hands-on experience that will set you apart in the field of big data and stream processing.