Both the JDBC source and sink connectors can work with PostgreSQL tables containing data stored as json or jsonb, with one caveat on the sink side: if the column in Postgres is of type json, the JDBC sink connector will throw an error. The JDBC source connector stores json or jsonb values as the string type in Kafka.
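As a rough sketch of what that looks like in practice (the connection details, table name, and jsonb column here are made up for illustration, not taken from this document), a source connector along these lines will surface a jsonb column as a plain string field in the Kafka record:

  # Minimal sketch of a JDBC source connector reading a Postgres table
  # that has a jsonb column named "payload" (all names are placeholders).
  name=postgres-json-source
  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
  connection.url=jdbc:postgresql://localhost:5432/demo
  connection.user=postgres
  connection.password=postgres
  table.whitelist=events
  mode=incrementing
  incrementing.column.name=id
  topic.prefix=pg-
  # The payload column shows up in the Kafka records as a string,
  # not as structured JSON.

Writing that string back with the sink connector into a column declared as json is what triggers the error described above.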
The JDBC source connector allows you to import data from any relational database with a JDBC driver into Kafka topics. The JDBC sink connector works with many databases without requiring custom code for each one.
The JDBC source and sink connectors include the open source PostgreSQL JDBC 4.0 driver to read from and write to a PostgreSQL database server. Because the JDBC 4.0 driver is included, no additional steps are necessary before running a connector against PostgreSQL databases. The Kafka Connect JDBC source connector imports data from any relational database with a JDBC driver into a Kafka topic, and the Kafka Connect JDBC sink connector exports data from Kafka topics to any relational database with a JDBC driver.
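To make the sink side concrete, here is a minimal sketch of a sink connector writing a topic into Postgres; the topic, table, and credentials are placeholders, and because the PostgreSQL driver ships with the connector, the jdbc:postgresql:// URL works without installing anything extra:

  # Minimal sketch of a JDBC sink connector; topic, database, and
  # credentials below are placeholders, not values from this document.
  name=postgres-sink
  connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
  connection.url=jdbc:postgresql://localhost:5432/demo
  connection.user=postgres
  connection.password=postgres
  topics=orders
  # Let the connector create the target table if it does not exist yet.
  auto.create=true
  insert.mode=insert
  pk.mode=none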
Learn how to work with the connector's source code by reading our development and contribution guidelines. For more information, check the documentation for the JDBC connector on the confluent.io website. Questions related to the connector can be asked on Community Slack or the Confluent Platform Google Group.

In my table I have 100k+ records, so I tried partitioning the topic into 10 partitions and setting tasks.max to 10 to speed up the loading process, which was much faster compared to a single partition.
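Whether tasks.max helps depends on which connector is doing the work: the JDBC source connector parallelises across tables, so a single table is still read by one task, whereas the sink connector runs as a consumer group and can use up to one task per topic partition. The following lines, added to a sink configuration like the one sketched earlier, are the knobs involved (the values are illustrative, not recommendations from this document):

  # Up to one task per topic partition; with a 10-partition topic, 10
  # tasks can write to Postgres in parallel, and extra tasks sit idle.
  tasks.max=10
  # batch.size (JDBC sink) controls how many records are grouped into a
  # single insert attempt, which also affects load throughput.
  batch.size=3000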
Can someone help me understand how the sink connector loads data into Postgres?
For a full working example, I'd recommend the PostgreSQL CDC Connect example here, or the JDBC source / sink examples, depending on which connector you're using. The configuration above will store the data as text and expects the column in Postgres to be of type text.
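If the target column really needs to be json or jsonb rather than text, a commonly cited workaround (not something this document prescribes, so treat it as an assumption to verify in your own environment) is to let the PostgreSQL JDBC driver send strings with an unspecified type so the server casts them itself, by adding stringtype=unspecified to the connection URL:

  # Sketch only: with stringtype=unspecified, Postgres will attempt to
  # cast the string value written by the sink into a json/jsonb column.
  connection.url=jdbc:postgresql://localhost:5432/demo?stringtype=unspecified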