Table data is split into tablets and managed by DocDB. By default, each tablet is synchronously replicated according to the replication factor, using the Raft algorithm across multiple nodes to ensure data consistency, fault tolerance, and high availability. The replication layer determines how data is replicated, synchronized, and kept consistent across the distributed system. In this section, you can explore the key concepts and techniques used in the replication layer of YugabyteDB.

Raft consensus protocol

At the heart of YugabyteDB is the Raft consensus protocol, which ensures that replicated data remains consistent across all the nodes. Raft is designed to be a more understandable alternative to the complex Paxos protocol. It works by electing a leader node that is responsible for managing the replicated log and coordinating replication to the other nodes.
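
As a rough illustration of the core idea, the following sketch models the majority-vote rule behind Raft leader election. It is a simplified simulation, not YugabyteDB's implementation; the `RaftNode` class and `run_election` helper are hypothetical names used here for illustration.

```python
class RaftNode:
    """Simplified Raft node modeling only the leader-election vote rule."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.current_term = 0
        self.voted_for = None  # candidate this node voted for in current_term

    def request_vote(self, candidate_id, candidate_term):
        # Grant the vote for a newer term, and at most once per term.
        if candidate_term > self.current_term:
            self.current_term = candidate_term
            self.voted_for = candidate_id
            return True
        if candidate_term == self.current_term and self.voted_for in (None, candidate_id):
            self.voted_for = candidate_id
            return True
        return False


def run_election(candidate, peers):
    """A candidate becomes leader only with votes from a strict majority."""
    candidate.current_term += 1
    candidate.voted_for = candidate.node_id  # candidate votes for itself
    votes = 1
    for peer in peers:
        if peer.request_vote(candidate.node_id, candidate.current_term):
            votes += 1
    cluster_size = len(peers) + 1
    return votes > cluster_size // 2  # strict majority required


nodes = [RaftNode(i) for i in range(3)]  # e.g. replication factor 3
print(run_election(nodes[0], nodes[1:]))  # True: candidate wins 3 of 3 votes
```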

To understand the different concepts in the consensus protocol, see Raft.

Synchronous replication

YugabyteDB ensures that every write is replicated to a majority of a tablet's replicas before the write is considered complete and acknowledged to the client. This provides strong consistency, as the data is guaranteed to be durable and available on multiple nodes. YugabyteDB's synchronous replication architecture is inspired by Google Spanner.
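
For example, with a replication factor of 3, a write can be acknowledged once 2 of the 3 replicas have persisted it. The following sketch illustrates this quorum rule; the `replicate_write` helper is hypothetical and is a simplification of Raft log replication, not actual YugabyteDB code.

```python
def replicate_write(record, replicas):
    """Acknowledge a write only after a majority of replicas persist it.

    `replicas` is a list of callables that persist the record and return
    True on success; this models Raft log replication at a high level.
    """
    acks = sum(1 for persist in replicas if persist(record))
    majority = len(replicas) // 2 + 1
    if acks >= majority:
        return "committed"  # safe to acknowledge the client
    raise RuntimeError("write failed: quorum not reached")


# With replication factor 3, the write commits even if one replica is down.
healthy = lambda rec: True
down = lambda rec: False
print(replicate_write({"k": 1}, [healthy, healthy, down]))  # committed
```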

To understand how replication works, see Synchronous replication.

xCluster

Asynchronous replication, on the other hand, does not wait for writes to be replicated to all nodes before acknowledging the client. Instead, writes are acknowledged immediately, and replication happens in the background. Asynchronous replication provides lower latency for write operations, as the client does not have to wait for replication to complete. The trade-off is potentially weaker consistency across universes, as there may be a delay before the replicas are fully synchronized.

In YugabyteDB, you can use xCluster to set up asynchronous replication between two distant universes, in either a unidirectional or bidirectional manner.
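
The sketch below illustrates the asynchronous pattern at a high level: the client is acknowledged as soon as the source universe commits the write, and a background process ships the change to the target universe. The `write` and `replicator` functions are hypothetical, and a simple in-process queue stands in for the actual change stream.

```python
import queue
import threading

change_log = queue.Queue()  # changes awaiting shipment to the target
target_universe = {}        # stand-in for the remote universe


def write(source_universe, key, value):
    source_universe[key] = value  # commit locally (synchronous, Raft-backed)
    change_log.put((key, value))  # enqueue for asynchronous shipment
    return "ack"                  # client is acknowledged immediately


def replicator():
    # Background thread: drain the change log and apply it to the target.
    while True:
        key, value = change_log.get()
        target_universe[key] = value
        change_log.task_done()


threading.Thread(target=replicator, daemon=True).start()

source = {}
print(write(source, "user:1", "alice"))  # "ack" returned before replication
change_log.join()                        # wait for the lag to drain (demo only)
print(target_universe)                   # {'user:1': 'alice'}
```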

To understand how asynchronous replication between two universes works, see xCluster.

Read replicas

Read replicas are effectively in-universe asynchronous replicas. A read replica is an optional cluster that you can add to an existing cluster to improve read latency for users located far away from your primary cluster.
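
As a simple illustration of the latency benefit, the following sketch routes a read to the lowest-latency cluster; the cluster names, latency numbers, and `route_read` helper are hypothetical.

```python
# Hypothetical round-trip latencies (ms) from a client in Europe.
clusters = {
    "primary-us-east": 120,   # primary cluster (accepts writes)
    "read-replica-eu": 15,    # in-universe read replica, async copy
}


def route_read(clusters):
    """Route the read to the lowest-latency cluster.

    A read replica serves possibly slightly stale data, since it
    receives changes asynchronously from the primary cluster.
    """
    return min(clusters, key=clusters.get)


print(route_read(clusters))  # read-replica-eu
```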

To understand how read replicas work, see Read replicas.

Change Data Capture (CDC)

CDC is a technique used to track and replicate changes to data. CDC systems monitor the database's transaction log and capture changes as they occur. These changes are then propagated to external systems or replicas using connectors.

CDC is particularly beneficial in scenarios where real-time data synchronization is required, such as data warehousing, stream processing, and event-driven architectures. It keeps replicated data in sync without the need for full table replication, which makes it more efficient and scalable.
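
The following sketch illustrates the polling pattern conceptually: only changes newer than a persisted checkpoint are read and shipped downstream, so no full-table scan is needed. The WAL entries, `poll_changes` function, and checkpoint handling are hypothetical simplifications; a real CDC connector tails the database's actual transaction log.

```python
import json

# Hypothetical, simplified write-ahead-log entries.
wal = [
    {"lsn": 1, "op": "INSERT", "table": "orders", "row": {"id": 1, "total": 99}},
    {"lsn": 2, "op": "UPDATE", "table": "orders", "row": {"id": 1, "total": 120}},
]

checkpoint = 0  # last LSN delivered downstream, persisted across restarts


def poll_changes(wal, checkpoint):
    """Yield only changes newer than the checkpoint (no full-table scan)."""
    for entry in wal:
        if entry["lsn"] > checkpoint:
            yield entry


for change in poll_changes(wal, checkpoint):
    print(json.dumps(change))   # ship to Kafka, a warehouse, etc.
    checkpoint = change["lsn"]  # advance the checkpoint after delivery
```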

To understand how CDC works, see CDC.