{"id":1318,"date":"2021-02-16T09:57:10","date_gmt":"2021-02-16T09:57:10","guid":{"rendered":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/?p=1318"},"modified":"2021-02-16T09:57:10","modified_gmt":"2021-02-16T09:57:10","slug":"database-changes-part-2","status":"publish","type":"post","link":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/","title":{"rendered":"Capturing and Streaming Database Changes with Debezium and Apache Kafka (Part 2) \u2013 Example"},"content":{"rendered":"\n<p>The first part of the series presented the technical options that can be used to capture changes in databases, and how the Debezium tool can be used in combination with the Apache Kafka platform to stream such change events and provide them to other applications.<\/p>\n\n\n\n<p>We are now going to develop, step by step, a small prototype that demonstrates the operating principle of Debezium. The architecture is structured as follows: There is a database with a single table labeled CdcDemo on a local instance of an SQL server. This table only contains a small number of datasets. We now install one instance each of Apache Kafka and Kafka Connect. Later, a topic for change events will be created in the Kafka instance, while Kafka Connect contains the SQL Server connector of Debezium. In the end, the change data from the topic will be read by two applications and displayed on the console in a simplified form. 
We use two consumers to show that the messages from Debezium can also be processed in parallel.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1-1024x242.jpg\" alt=\"Architecture of the prototype\" class=\"wp-image-2186\" width=\"768\" height=\"182\" \/><figcaption><em>Figure 1: Architecture of the prototype<\/em><\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Preparing the SQL Server<\/h2>\n\n\n\n<p>First of all, we have to lay the groundwork for demonstrating change data capture by creating a small database. For this purpose, we use the following command to create a table in a database on a local SQL Server instance:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">CREATE TABLE CdcDemo (\n\tId INT PRIMARY KEY,\n\tSurname VARCHAR(50) NULL,\n\tForename VARCHAR(50) NULL\n)<\/pre>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>We can now add any number of records to the table. 
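Such records can be added with plain INSERT statements matching the table definition above, for example:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">INSERT INTO CdcDemo (Id, Surname, Forename) VALUES (101, 'Lindgren', 'Astrid')\nINSERT INTO CdcDemo (Id, Surname, Forename) VALUES (102, 'King', 'Stephen')<\/pre>\n\n\n\n<p>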
In the example, they are the names of famous writers.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td>Id<\/td><td>Surname<\/td><td>Forename<\/td><\/tr><tr><td>101<\/td><td>Lindgren<\/td><td>Astrid<\/td><\/tr><tr><td>102<\/td><td>King<\/td><td>Stephen<\/td><\/tr><tr><td>103<\/td><td>K\u00e4stner<\/td><td>Erich<\/td><\/tr><tr><td>&#8230;<\/td><td>&#8230;<\/td><td>&#8230;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>Next, we have to make the preparations that are specific to the Debezium connector. For SQL Server, this means that both the database and the table have to be enabled for change data capture. This is done by executing the following two system procedures:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">EXEC sys.sp_cdc_enable_db\n\nEXEC sys.sp_cdc_enable_table\n\t@source_schema = N'dbo',\n\t@source_name = N'CdcDemo',\n\t@role_name = N'CdcRole'<\/pre>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>In the present example, dbo is the schema of the table labeled <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">CdcDemo<\/code>. The change data are accessed by way of the role <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">CdcRole<\/code>.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Setting up Kafka and Debezium<\/h2>\n\n\n\n<p>When the preparations in the SQL Server have been completed, we can set up the infrastructure required for Debezium. 
It is advisable to first check that a current Java runtime is installed.<\/p>\n\n\n\n<p>We can now download the Apache Kafka software from the official download site and unzip it to a folder of our choice; no installation is required. Then, we have to download the SQL Server connector of Debezium and unzip it to a folder as well.<\/p>\n\n\n\n<p>Once Apache Kafka and Debezium have successfully been downloaded, we can create a new configuration for the connector by means of a Java properties file:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">name=srcsys1-connector\nconnector.class=io.debezium.connector.sqlserver.SqlServerConnector\ndatabase.hostname=123.123.123.123\ndatabase.port=1433\ndatabase.user=cdc-demo-user\ndatabase.password=cdc-demo-password\ndatabase.dbname=cdc-demo-db\ndatabase.server.name=srcsys1\ntable.whitelist=dbo.CdcDemo\ndatabase.history.kafka.bootstrap.servers=localhost:9092\ndatabase.history.kafka.topic=dbhistory.srcsys1<\/pre>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>The <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">connector.class<\/code> setting is of particular importance. It tells Kafka Connect which of the downloaded, executable connectors is to be used. The class names of the Debezium connectors can be found in the respective documentation. Furthermore, <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">database.server.name<\/code> determines the logical name that Debezium uses for the database. This name later forms part of the topic names in Kafka. 
By means of the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">table.whitelist<\/code> configuration, we can specify all the tables that the Debezium connector is expected to monitor. All the other parameters are explained in the Debezium documentation.<\/p>\n\n\n\n<p>Next, we have to adapt the configuration file of Kafka Connect, which is located in the <em>config <\/em>folder of the Kafka installation. Since we only need one instance for the present example, we have to use the file <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">connect-standalone.properties<\/code>. In principle, we can keep the default settings in this case. We only have to indicate the path to the downloaded Debezium connector for the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">plugin.path<\/code> property. Please note: This does not mean the path to the JAR files, but to the folder above them in the hierarchy, because Kafka Connect can also simultaneously execute several connectors located in this folder.<\/p>\n\n\n\n<p>For Apache Kafka itself, a small modification in the configuration file <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">server.properties<\/code> is useful. As two consumers are supposed to process the Debezium messages in the end, it is expedient to increase the number of partitions for a topic to two. This way, the change events are written either to the first or the second partition. Each partition is then allocated to a consumer, ensuring that the messages are processed in parallel, but not twice. To implement this, we set the <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">num.partitions<\/code> parameter to 2.<\/p>\n\n\n\n<p>Now that all the components involved have been configured, we can start the instances. 
Following the correct sequence is important.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ .\/bin\/zookeeper-server-start.sh config\/zookeeper.properties\n$ .\/bin\/kafka-server-start.sh config\/server.properties\n$ .\/bin\/connect-standalone.sh config\/connect-standalone.properties &lt;path_to_debezium_config&gt;<\/pre>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>First, we start ZooKeeper, which is responsible for coordinating the Kafka instances. Then, a Kafka server is started and registers with ZooKeeper. Lastly, we start Kafka Connect together with the Debezium connector.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Implementing a consumer<\/h2>\n\n\n\n<p>The infrastructure for Debezium is now complete. All we need is a consumer that can process the messages from Kafka. As an example, we program a .NET Core console application using the Confluent.Kafka library, based on the introductory example of the library on GitHub. 
In addition, a small helper method formats the JSON messages read from Kafka into concise console output.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">using Confluent.Kafka;\nusing Newtonsoft.Json.Linq;\nusing System;\nusing System.Threading;\n\nnamespace StreamKafka\n{\n    class Program\n    {\n        static void Main(string[] args)\n        {\n            var config = new ConsumerConfig\n            {\n                GroupId = \"streamer-group\",\n                BootstrapServers = \"localhost:9092\",\n                AutoOffsetReset = AutoOffsetReset.Earliest,\n            };\n\n            using (var consumer = new ConsumerBuilder&lt;Ignore, string&gt;(config).Build())\n            {\n                consumer.Subscribe(\"srcsys1.dbo.CdcDemo\");\n\n                CancellationTokenSource cts = new CancellationTokenSource();\n                Console.CancelKeyPress += (_, e) =&gt;\n                {\n                    e.Cancel = true;\n                    cts.Cancel();\n                };\n\n                try\n                {\n                    while (true)\n                    {\n                        try\n                        {\n                            var consumeResult = consumer.Consume(cts.Token);\n\n                            if (consumeResult.Message.Value != null)\n                                Console.WriteLine($\"[{consumeResult.TopicPartitionOffset}]  \" + ProcessMessage(consumeResult.Message.Value));\n                        }\n                        catch (ConsumeException e)\n                        {\n                            Console.WriteLine($\"Error occurred: {e.Error.Reason}\");\n                        }\n                    }\n                }\n                catch (OperationCanceledException)\n                {\n   
                 consumer.Close();\n                }\n            }\n\n\n        }\n\n        static string ProcessMessage(string jsonString)\n        {\n            var jsonObject = JObject.Parse(jsonString);\n            var payload = jsonObject[\"payload\"];\n\n            string returnString = \"\";\n\n            char operation = payload[\"op\"].ToString()[0];\n\n            switch (operation)\n            {\n                case 'c':\n                    returnString += \"INSERT: \";\n                    returnString += $\"{payload[\"after\"][\"Id\"]} | {payload[\"after\"][\"Surname\"]} | {payload[\"after\"][\"Forename\"]}\";\n                    break;\n\n                case 'd':\n                    returnString += \"DELETE: \";\n                    returnString += $\"{payload[\"before\"][\"Id\"]} | {payload[\"before\"][\"Surname\"]} | {payload[\"before\"][\"Forename\"]}\";\n                    break;\n\n                case 'u':\n                    returnString += \"UPDATE: \";\n                    returnString += $\"{payload[\"before\"][\"Id\"]} | {payload[\"before\"][\"Surname\"]} | {payload[\"before\"][\"Forename\"]} --&gt; \" +\n                        $\"{payload[\"after\"][\"Id\"]} | {payload[\"after\"][\"Surname\"]} | {payload[\"after\"][\"Forename\"]}\";\n                    break;\n\n                default:\n                    returnString += $\"{payload[\"after\"][\"Id\"]} | {payload[\"after\"][\"Surname\"]} | {payload[\"after\"][\"Forename\"]}\";\n                    break;\n            }\n\n            return returnString;\n        }\n    }\n}<\/pre>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>Several points in the source code are worth noting: Firstly, a configuration is defined for the consumer. It includes a <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">GroupId<\/code>, which is represented by a string. 
The group is used to split the work between the consumers, because applications in the same group never process the same message twice. The consumer then subscribes to the topic <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">srcsys1.dbo.CdcDemo<\/code>, which Debezium has previously created automatically in Kafka. The name of the topic results from the parameters for the server and the table specified in the Debezium configuration. Subsequently, the consumer goes into an infinite loop of reading, processing and outputting messages.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Testing the prototype<\/h2>\n\n\n\n<p>All the components required for this prototype have now been installed, configured, and implemented. Time to test the prototype. It is advisable to first start two instances of the implemented consumer and then execute Kafka and Debezium as described above.<\/p>\n\n\n\n<p>Once all the components are up and running, the Debezium connector takes a snapshot of the database table and writes the resulting messages to Kafka, where the two consumers are already waiting. 
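Each of these messages is a JSON document whose <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">payload<\/code> carries the operation type (<code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">op<\/code>, where <code data-enlighter-language=\"generic\" class=\"EnlighterJSRAW\">r<\/code> denotes a read during the initial snapshot) together with the row state before and after the change. Abridged, and with the additional metadata fields omitted, a snapshot event for one of our records looks roughly like this:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">{\n  \"payload\": {\n    \"op\": \"r\",\n    \"before\": null,\n    \"after\": {\n      \"Id\": 101,\n      \"Surname\": \"Lindgren\",\n      \"Forename\": \"Astrid\"\n    }\n  }\n}<\/pre>\n\n\n\n<p>The consumers reduce these JSON documents to one line per event. 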
They are supposed to produce output that resembles the image below.<\/p>\n\n\n\n<figure class=\"wp-block-image size-medium\"><img decoding=\"async\" src=\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/01\/202101_Datenbankaenderungen_Teil2_2-600x422.png\" alt=\"The consumers output the snapshot of the database table\" class=\"wp-image-2031\" \/><figcaption><em>Figure 2: The consumers output the snapshot of the database table<\/em><\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>A word on the significance of the output: The information in square brackets preceding each record indicates the topic, the partition number and the offset of the respective message. You can see that each consumer only deals with the messages of one partition. Debezium decides which partition a record is assigned to by hashing the primary key and taking the result modulo the number of partitions.<\/p>\n\n\n\n<p>We can now test how Debezium responds to changes in the table. We can execute INSERT, UPDATE and DELETE commands on the database by means of the SQL Server Management Studio. Shortly after a statement has been issued, the consumers should respond and produce corresponding output. 
After executing a few DML commands, the output could look like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/01\/202101_Datenbankaenderungen_Teil2_3.png\" alt=\"Console output of the consumers following several changes in the table\" class=\"wp-image-2029\" width=\"693\" height=\"333\" \/><figcaption><em>Figure 3: Console output of the consumers following several changes in the table<\/em><\/figcaption><\/figure>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>One last question that needs to be answered: Can the partitioning of the messages cause race conditions? In other words, could changes to the same record \u201covertake\u201d each other across the two partitions, causing them to be processed in the wrong order? The answer is no. Fortunately, Debezium has already considered this possibility. As the change events are allocated to their respective partition based on their primary key as described above, change data referring to the same record always end up in the same partition, one after the other, where they are processed by one consumer in the correct order.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The example shows that using Debezium in combination with Apache Kafka to stream database changes is relatively simple. Insert, update and delete commands can be processed in near real time. In addition to the examples shown in this prototype, it is also possible to stream changes in the data schema. For this purpose, Debezium creates a separate topic in Kafka.<\/p>\n\n\n\n<p>Please note that the prototype presented here is a minimal example. 
To put Debezium to productive use, the respective components need to be scaled in order to ensure a certain level of fault tolerance and reliability.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.<\/p>\n","protected":false},"author":110,"featured_media":1321,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"advgb_blocks_editor_width":"","advgb_blocks_columns_visual_guide":"","footnotes":""},"categories":[12,13],"tags":[586,646,648,649,653,656,657,658,659,660],"topics":[654],"class_list":["post-1318","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-java","category-dot-net","tag-database","tag-debezium","tag-apache-kafka","tag-kafka","tag-change-data-capture","tag-relational-databases","tag-streaming-platform","tag-change-events-2","tag-data-changes","tag-database-changes","topics-database-changes-with-debezium-and-apache-kafka"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Capturing and Streaming Database ... - ZEISS Digital Innovation Blog<\/title>\n<meta name=\"description\" content=\"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Capturing and Streaming Database ... 
- ZEISS Digital Innovation Blog\" \/>\n<meta property=\"og:description\" content=\"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/\" \/>\n<meta property=\"og:site_name\" content=\"Digital Innovation Blog\" \/>\n<meta property=\"article:published_time\" content=\"2021-02-16T09:57:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2000\" \/>\n\t<meta property=\"og:image:height\" content=\"1125\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Richard Mogwitz\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Richard Mogwitz\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/\",\"url\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/\",\"name\":\"Capturing and Streaming Database ... 
- ZEISS Digital Innovation Blog\",\"isPartOf\":{\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg\",\"datePublished\":\"2021-02-16T09:57:10+00:00\",\"dateModified\":\"2021-02-16T09:57:10+00:00\",\"author\":{\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/b98a549c93ed1b935fc194d097251461\"},\"description\":\"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.\",\"breadcrumb\":{\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage\",\"url\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg\",\"contentUrl\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg\",\"width\":2000,\"height\":1125,\"caption\":\"Architektur des 
Prototypen\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Capturing and Streaming Database Changes with Debezium and Apache Kafka (Part 2) \u2013 Example\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#website\",\"url\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/\",\"name\":\"Digital Innovation Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/b98a549c93ed1b935fc194d097251461\",\"name\":\"Richard Mogwitz\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-content\/uploads\/sites\/3\/2021\/02\/mogwitz_richard-e1612343373674-150x150.jpg\",\"contentUrl\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-content\/uploads\/sites\/3\/2021\/02\/mogwitz_richard-e1612343373674-150x150.jpg\",\"caption\":\"Richard Mogwitz\"},\"description\":\"Richard Mogwitz is studying Applied Computer Science at Dresden University of Applied Sciences and has been working as a student trainee at ZEISS Digital Innovation since 2019. 
He is mainly involved in the development of .NET applications, but also in programming web applications with Blazor and Angular.\",\"url\":\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/author\/enrichardmogwitz\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Capturing and Streaming Database ... - ZEISS Digital Innovation Blog","description":"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/","og_locale":"en_US","og_type":"article","og_title":"Capturing and Streaming Database ... - ZEISS Digital Innovation Blog","og_description":"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.","og_url":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/","og_site_name":"Digital Innovation Blog","article_published_time":"2021-02-16T09:57:10+00:00","og_image":[{"width":2000,"height":1125,"url":"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg","type":"image\/jpeg"}],"author":"Richard Mogwitz","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Richard Mogwitz","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/","url":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/","name":"Capturing and Streaming Database ... 
- ZEISS Digital Innovation Blog","isPartOf":{"@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage"},"image":{"@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage"},"thumbnailUrl":"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg","datePublished":"2021-02-16T09:57:10+00:00","dateModified":"2021-02-16T09:57:10+00:00","author":{"@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/b98a549c93ed1b935fc194d097251461"},"description":"This article uses an example to show how relational databases can be better managed with Debezium and Apache Kafka.","breadcrumb":{"@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#primaryimage","url":"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg","contentUrl":"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi.jpg","width":2000,"height":1125,"caption":"Architektur des Prototypen"},{"@type":"BreadcrumbList","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/database-changes-part-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/"},{"@type":"ListItem","position":2,"name":"Capturing and Streaming Database Changes with Debezium and Apache Kafka (Part 2) \u2013 
Example"}]},{"@type":"WebSite","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#website","url":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/","name":"Digital Innovation Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/b98a549c93ed1b935fc194d097251461","name":"Richard Mogwitz","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/#\/schema\/person\/image\/","url":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-content\/uploads\/sites\/3\/2021\/02\/mogwitz_richard-e1612343373674-150x150.jpg","contentUrl":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-content\/uploads\/sites\/3\/2021\/02\/mogwitz_richard-e1612343373674-150x150.jpg","caption":"Richard Mogwitz"},"description":"Richard Mogwitz is studying Applied Computer Science at Dresden University of Applied Sciences and has been working as a student trainee at ZEISS Digital Innovation since 2019. 
He is mainly involved in the development of .NET applications, but also in programming web applications with Blazor and Angular.","url":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/author\/enrichardmogwitz\/"}]}},"author_meta":{"display_name":"Richard Mogwitz","author_link":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/author\/enrichardmogwitz\/"},"featured_img":"https:\/\/blogs.zeiss.com\/digital-innovation\/de\/wp-content\/uploads\/sites\/2\/2021\/02\/202102_Datenbankaenderungen_Teil_2_1_fi-600x338.jpg","coauthors":[],"tax_additional":{"categories":{"linked":["<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/java\/\" class=\"advgb-post-tax-term\">Java<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">.NET<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">Java<\/span>","<span class=\"advgb-post-tax-term\">.NET<\/span>"]},"tags":{"linked":["<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">database<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">Debezium<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">Apache Kafka<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">Kafka<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">Change Data Capture<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">relational databases<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">streaming platform<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" 
class=\"advgb-post-tax-term\">change events<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">data changes<\/a>","<a href=\"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/category\/dot-net\/\" class=\"advgb-post-tax-term\">database changes<\/a>"],"unlinked":["<span class=\"advgb-post-tax-term\">database<\/span>","<span class=\"advgb-post-tax-term\">Debezium<\/span>","<span class=\"advgb-post-tax-term\">Apache Kafka<\/span>","<span class=\"advgb-post-tax-term\">Kafka<\/span>","<span class=\"advgb-post-tax-term\">Change Data Capture<\/span>","<span class=\"advgb-post-tax-term\">relational databases<\/span>","<span class=\"advgb-post-tax-term\">streaming platform<\/span>","<span class=\"advgb-post-tax-term\">change events<\/span>","<span class=\"advgb-post-tax-term\">data changes<\/span>","<span class=\"advgb-post-tax-term\">database changes<\/span>"]}},"comment_count":"0","relative_dates":{"created":"Posted 5 years ago","modified":"Updated 5 years ago"},"absolute_dates":{"created":"Posted on February 16, 2021","modified":"Updated on February 16, 2021"},"absolute_dates_time":{"created":"Posted on February 16, 2021 9:57 am","modified":"Updated on February 16, 2021 9:57 
am"},"featured_img_caption":"","series_order":"","_links":{"self":[{"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/posts\/1318","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/users\/110"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/comments?post=1318"}],"version-history":[{"count":6,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/posts\/1318\/revisions"}],"predecessor-version":[{"id":1326,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/posts\/1318\/revisions\/1326"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/media\/1321"}],"wp:attachment":[{"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/media?parent=1318"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/categories?post=1318"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/tags?post=1318"},{"taxonomy":"topics","embeddable":true,"href":"https:\/\/blogs.zeiss.com\/digital-innovation\/en\/wp-json\/wp\/v2\/topics?post=1318"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}