Read replicas
Applications running in other regions incur cross-region latency to read the latest data from the leader. If those applications can tolerate slightly stale reads, read replicas are the pattern to adopt.
A read replica cluster is a set of follower nodes connected to a primary cluster. These are purely observer nodes, which means that they don't take part in the Raft consensus and elections. As a result, read replicas can have a different replication factor (RF) than the primary cluster, and you can have an even number of replicas.
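Read replica placement is typically attached to a running cluster with yb-admin. The following is a hedged sketch of that command as commonly documented; the master addresses, cloud/region/zone names, replica counts, and the `rr-placement` UUID are all illustrative placeholders, not values from this page:

```shell
# Sketch: attach a read replica placement to an existing primary cluster.
# Placement format: <cloud.region.zone:min_replicas,...> <total_replicas> [placement_uuid]
# All addresses, zones, and counts below are placeholders.
yb-admin \
  -master_addresses 172.16.0.1:7100,172.16.0.2:7100,172.16.0.3:7100 \
  add_read_replica_placement_info \
  aws.us-west.us-west-2a:1,aws.us-central.us-central-1a:1 \
  2 rr-placement
```

The read replica nodes themselves are started so that they identify with this placement (for example, via a matching placement UUID flag on each yb-tserver), which is what keeps them out of Raft consensus on the primary cluster.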
Let's look into how this can be beneficial for your application.
Setup

To set up a local universe, refer to Set up a local YugabyteDB universe.

To set up a cluster, refer to Set up a YugabyteDB Aeon cluster.

To set up a universe, refer to Set up a YugabyteDB Anywhere universe.

Suppose you have an RF 3 cluster set up in us-east-1 and us-east-2, with leader preference set to us-east-1, and suppose you want to run other applications in us-central and us-west. The read latencies would be similar to the following illustration.
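A leader preference like the one described above is typically expressed through yb-admin's preferred zones setting. A minimal sketch, assuming placeholder master addresses and zone names:

```shell
# Sketch: prefer tablet leaders in us-east-1 (addresses and zone are placeholders).
yb-admin \
  -master_addresses 172.16.0.1:7100,172.16.0.2:7100,172.16.0.3:7100 \
  set_preferred_zones aws.us-east-1.us-east-1a
```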
Improve read latencies
To improve read latencies, set up separate Read Replica clusters in each of the regions where you want to run your application and where a little staleness is acceptable.
This enables the application to read data from the closest replica instead of going cross-region to the tablet leaders.
Notice that the read latency for the application in us-west has dropped drastically from the initial 60 ms to 2 ms, and the read latency of the application in us-central has also dropped to 2 ms.
Because read replicas may lag behind the leader by design, reads from them can return slightly stale data. The maximum permissible staleness defaults to 30 seconds and is configurable.
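In YSQL, applications opt into these stale-but-local reads per session. The following sketch assumes the follower-read session variables as commonly documented (`yb_read_from_followers` and `yb_follower_read_staleness_ms`, the latter being where the 30-second default is expressed as 30000 ms):

```sql
-- Sketch: opt this session into reading from the closest replica.
-- Such reads apply only to read-only transactions.
SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY;
SET yb_read_from_followers = true;
-- Maximum acceptable staleness; 30000 ms (30 seconds) is the default.
SET yb_follower_read_staleness_ms = 30000;
```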
Failover
When the read replicas in a region fail, the application redirects its reads to the next closest read replica or to the leader.
Notice how the application in us-west reads from the follower in us-central when the read replicas in us-west fail. In this case, the read latency is 40 ms, which is still much lower than the original 60 ms.