Joins are a fundamental concept in relational databases, used to query and combine data from multiple tables. Use joins to combine rows from two or more tables based on a related column; the related column is usually a foreign key that establishes a relationship between the tables. A join condition specifies how the rows of one table are matched with the rows of another, and can be defined in an ON, USING, or WHERE clause.

Although as a user you write your query using one of the standard join types - inner, left outer, right outer, full outer, or cross - the query planner chooses from one of several join strategies to execute the query and fetch results.

The query optimizer is responsible for determining the most efficient join strategy for a given query, to minimize the computational cost and improve overall performance. Knowing these strategies will help you understand the performance of your queries.

Setup

The examples run on any YugabyteDB universe.

Set up a local cluster

If a local universe is currently running, first destroy it.

Start a local one-node universe with a replication factor (RF) of 1, as follows:

./bin/yugabyted start \
                --advertise_address=127.0.0.1 \
                --base_dir=${HOME}/var/node1 \
                --cloud_location=aws.us-east-2.us-east-2a

After starting the yugabyted process, configure the data placement constraint of the universe, as follows:

./bin/yugabyted configure data_placement --base_dir=${HOME}/var/node1 --fault_tolerance=zone

This command can be executed on any node where you already started YugabyteDB.

To check the status of a running multi-node universe, run the following command:

./bin/yugabyted status --base_dir=${HOME}/var/node1

Setup

To set up a universe, refer to Set up a YugabyteDB Anywhere universe.

Setup

To set up a cluster, refer to Set up a YugabyteDB Aeon cluster.

Sample schema

Suppose you work with a database that includes the following tables and indexes:

CREATE TABLE students (
    id int,
    name varchar(255),
    PRIMARY KEY(id)
);

CREATE TABLE scores (
    id int,
    subject varchar(100),
    score int,
    PRIMARY KEY(id, subject)
);

CREATE INDEX idx_name ON students(name);
CREATE INDEX idx_id ON scores(id);

Load some data into the tables by adding some students:

INSERT INTO students (id,name)
   SELECT n, (ARRAY['Natasha', 'Lisa', 'Mike', 'Michael', 'Anthony'])[floor(random() * 5 + 1)]
      FROM generate_series(1, 20) AS n;

Add some scores to the students as follows:

WITH subjects AS (
    SELECT unnest(ARRAY['English', 'History', 'Math', 'Spanish', 'Science']) AS subject
)
INSERT INTO scores (id, subject, score)
    SELECT id, subject, (40+random()*60)::int AS score
        FROM subjects CROSS JOIN students ORDER BY id;

Run the following command to ensure that the tables are analyzed for the optimizer:

ANALYZE students, scores;

The following examples use hints to force the query planner to pick a specific join strategy. In practice, you don't need to specify a hint, as the planner automatically picks an appropriate strategy.

Nested loop join

Nested loop join is the simplest join algorithm. It involves iterating through each row of the first table and checking for matches in the second table based on the join condition.

Nested loop join has an outer loop and an inner loop. The outer loop iterates through the rows of the first (outer) table, while the inner loop iterates through the rows of the second (inner) table.

The worst-case time complexity is O(m*n), where m and n are the number of rows in the outer and inner tables respectively. This strategy is often chosen when one table is small and the other has an index on the join column.
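The algorithm can be illustrated with a minimal Python sketch. This is an illustration of the general technique only, not YugabyteDB's implementation; rows are modeled as dicts and the function name is a placeholder:

```python
# Minimal sketch of the nested loop join algorithm (illustrative only,
# not YugabyteDB's implementation). Rows are modeled as dicts.
def nested_loop_join(outer, inner, join_key):
    result = []
    for o in outer:              # outer loop: every row of the outer table
        for i in inner:          # inner loop: scan the inner table once per outer row
            if o[join_key] == i[join_key]:   # join condition
                result.append({**o, **i})
    return result
```

With m outer rows and n inner rows, the inner comparison runs m*n times, which is why an index on the inner table's join column matters so much for this strategy.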

This is also often the preferred join strategy, as it is the only join method that requires no extra memory overhead, and it works well in queries where the join clause has low selectivity. If you compare the peak memory usage of this join strategy with the others, you will notice that it is the lowest.

To fetch all scores of students named Natasha who have scored more than 70 in any subject using nested loop join, you would execute the following:

explain (analyze, dist, costs off)
/*+
    set(enable_material false)
    nestloop(students scores)
    set(yb_enable_optimizer_statistics on)
*/
SELECT name, subject, score
      FROM students JOIN scores USING(id)
      WHERE name = 'Natasha' and score > 70;

The query plan would be similar to the following:

 Nested Loop (actual time=2002.483..9011.038 rows=18 loops=1)
   ->  Seq Scan on students (actual time=1001.369..1001.393 rows=8 loops=1)
         Storage Filter: ((name)::text = 'Natasha'::text)
         Storage Table Read Requests: 1
         Storage Table Read Execution Time: 1001.245 ms
   ->  Index Scan using scores_pkey on scores (actual time=1001.141..1001.146 rows=2 loops=8)
         Index Cond: (id = students.id)
         Storage Filter: (score > 70)
         Storage Table Read Requests: 1
         Storage Table Read Execution Time: 1001.050 ms
 Planning Time: 0.308 ms
 Execution Time: 9011.109 ms
 Storage Read Requests: 9
 Storage Read Execution Time: 9009.641 ms
 Storage Execution Time: 9009.641 ms
 Peak Memory Usage: 56 kB

Batched nested loop join (BNL)

In the case of nested loop joins, the inner table is accessed multiple times, once for each outer table row. This leads to multiple RPC requests across the different nodes in the cluster, making this join strategy very slow as the outer table gets larger.

To reduce the number of requests sent across the nodes during the nested loop join, YugabyteDB adds an optimization to batch multiple keys of the outer table into one RPC request. This batch size can be controlled using the YSQL configuration parameter yb_bnl_batch_size, which defaults to 1024 (the suggested value for this variable).

If yb_bnl_batch_size is greater than 1, the optimizer tries to apply the batching optimization when other join strategies are not a good fit for the query.
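The effect of batching can be sketched in Python as follows (illustrative only, not YugabyteDB's implementation; the batch of keys here stands in for the keys sent together in a single RPC request):

```python
# Minimal sketch of batched nested loop join (illustrative only, not
# YugabyteDB's implementation). Outer join keys are collected into batches,
# so the inner table is probed once per batch instead of once per outer row.
def batched_nl_join(outer, inner, join_key, batch_size=1024):
    result = []
    for start in range(0, len(outer), batch_size):
        batch = outer[start:start + batch_size]
        keys = {o[join_key] for o in batch}                  # keys for one batched request
        matches = [i for i in inner if i[join_key] in keys]  # one probe per batch
        for o in batch:
            for i in matches:
                if o[join_key] == i[join_key]:
                    result.append({**o, **i})
    return result
```

With the default batch size of 1024, an outer table of 8 rows needs a single probe of the inner table, mirroring the `id = ANY (ARRAY[...])` index condition shown in the plan below.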

To enable the query planner's use of BNL, use the YSQL configuration parameter yb_enable_batchednl.

To fetch all scores of students named Natasha who have scored more than 70 in any subject using BNL, you would execute the following:

explain (analyze, dist, costs off)
/*+
    ybbatchednl(students scores)
    set(yb_bnl_batch_size 1024)
    set(yb_enable_optimizer_statistics on)
*/
SELECT name, subject, score
      FROM students JOIN scores USING(id)
      WHERE name = 'Natasha' and score > 70;

The query plan would be similar to the following:

 YB Batched Nested Loop Join (actual time=4004.255..4004.297 rows=18 loops=1)
   Join Filter: (students.id = scores.id)
   ->  Index Scan using idx_name on students (actual time=2001.861..2001.870 rows=8 loops=1)
         Index Cond: ((name)::text = 'Natasha'::text)
         Storage Table Read Requests: 1
         Storage Table Read Execution Time: 1001.078 ms
         Storage Index Read Requests: 1
         Storage Index Read Execution Time: 1000.589 ms
   ->  Index Scan using idx_id on scores (actual time=2002.290..2002.305 rows=18 loops=1)
         Index Cond: (id = ANY (ARRAY[students.id, $1, $2, ..., $1023]))
         Storage Filter: (score > 70)
         Storage Table Read Requests: 1
         Storage Table Read Execution Time: 1001.356 ms
         Storage Index Read Requests: 1
         Storage Index Read Execution Time: 1000.670 ms
 Planning Time: 0.631 ms
 Execution Time: 4004.541 ms
 Storage Read Requests: 4
 Storage Read Execution Time: 4003.693 ms
 Storage Execution Time: 4003.693 ms
 Peak Memory Usage: 476 kB

The key point to note is the index condition:

(id = ANY (ARRAY[students.id, $1, $2, ..., $1023]))

which resulted in loops=1 for the inner scan because the lookups were batched. With a plain nested loop join, the inner side looped 8 times. Such batching can drastically improve performance in many scenarios.

Merge join

In a merge join, both input tables must be sorted on the join key. It works by merging the sorted input streams based on the join condition.

Merge joins are efficient for large datasets when the join condition involves equality comparisons. If the tables are not already sorted, the optimizer adds sorting operations to the execution plan.

Because the join key columns are sorted, the executor can merge the two streams in O(n+m) time. Merge joins are typically suitable for equality-based joins and are commonly used when the join keys are indexed or the data is naturally sorted.
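The merge step can be sketched in Python as follows (illustrative only, not YugabyteDB's implementation); two cursors advance through the sorted inputs, so each side is scanned once:

```python
# Minimal sketch of a merge join (illustrative only, not YugabyteDB's
# implementation). Both inputs are sorted on the join key, then merged
# with two cursors in O(n + m) comparisons.
def merge_join(left, right, join_key):
    left = sorted(left, key=lambda r: r[join_key])
    right = sorted(right, key=lambda r: r[join_key])
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][join_key], right[j][join_key]
        if lk < rk:
            i += 1                       # left key is behind: advance left
        elif lk > rk:
            j += 1                       # right key is behind: advance right
        else:
            k = j
            while k < len(right) and right[k][join_key] == lk:
                result.append({**left[i], **right[k]})  # emit every match for this key
                k += 1
            i += 1
    return result
```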

To fetch all scores of students named Natasha who have scored more than 70 in any subject and use Merge join, you would execute the following:

explain (analyze, dist, costs off)
/*+
    mergejoin(students scores)
    set(yb_enable_optimizer_statistics on)
*/
SELECT name, subject, score
      FROM students JOIN scores USING(id)
      WHERE name = 'Natasha' and score > 70;

The query plan would be similar to the following:

 Merge Join (actual time=2002.906..2002.947 rows=18 loops=1)
   Merge Cond: (students.id = scores.id)
   ->  Sort (actual time=1001.443..1001.445 rows=8 loops=1)
         Sort Key: students.id
         Sort Method: quicksort  Memory: 25kB
         ->  Seq Scan on students (actual time=1001.418..1001.424 rows=8 loops=1)
               Storage Filter: ((name)::text = 'Natasha'::text)
               Storage Table Read Requests: 1
               Storage Table Read Execution Time: 1001.243 ms
   ->  Sort (actual time=1001.447..1001.455 rows=51 loops=1)
         Sort Key: scores.id
         Sort Method: quicksort  Memory: 27kB
         ->  Seq Scan on scores (actual time=1001.370..1001.390 rows=51 loops=1)
               Storage Filter: (score > 70)
               Storage Table Read Requests: 1
               Storage Table Read Execution Time: 1001.250 ms
 Planning Time: 0.252 ms
 Execution Time: 2003.039 ms
 Storage Read Requests: 2
 Storage Read Execution Time: 2002.493 ms
 Storage Execution Time: 2002.493 ms
 Peak Memory Usage: 126 kB

Hash join

The hash join algorithm starts by hashing the join key of one input table (typically the smaller one) to build an in-memory hash table, and then probes that table with the hashed join key of each row from the other input.

Hash joins are suitable for equi-joins (joins based on equality) and are effective when the join keys are not sorted or indexed.
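The build and probe phases can be sketched in Python as follows (illustrative only, not YugabyteDB's implementation; a plain dict stands in for the hash table):

```python
# Minimal sketch of a hash join (illustrative only, not YugabyteDB's
# implementation): build a hash table on one input, then probe it
# with each row of the other input.
from collections import defaultdict

def hash_join(build_side, probe_side, join_key):
    table = defaultdict(list)
    for row in build_side:                   # build phase: hash the join key
        table[row[join_key]].append(row)
    result = []
    for row in probe_side:                   # probe phase: O(1) expected lookup per row
        for match in table.get(row[join_key], ()):
            result.append({**match, **row})
    return result
```

Building on the smaller input keeps the hash table small, which is why the plan below hashes the 8 matching `students` rows and probes with the `scores` rows.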

To fetch all scores of students named Natasha who have scored more than 70 in any subject using hash join, you would execute the following:

explain (analyze, dist, costs off)
/*+
    hashjoin(students scores)
    set(yb_enable_optimizer_statistics on)
*/
SELECT name, subject, score
      FROM students JOIN scores USING(id)
      WHERE name = 'Natasha' and score > 70;

The query plan would be similar to the following:

 Hash Join (actual time=2002.431..2002.477 rows=18 loops=1)
   Hash Cond: (scores.id = students.id)
   ->  Seq Scan on scores (actual time=1001.043..1001.061 rows=51 loops=1)
         Storage Filter: (score > 70)
         Storage Table Read Requests: 1
         Storage Table Read Execution Time: 1000.927 ms
   ->  Hash (actual time=1001.360..1001.361 rows=8 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Seq Scan on students (actual time=1001.327..1001.335 rows=8 loops=1)
               Storage Filter: ((name)::text = 'Natasha'::text)
               Storage Table Read Requests: 1
               Storage Table Read Execution Time: 1001.211 ms
 Planning Time: 0.231 ms
 Execution Time: 2002.540 ms
 Storage Read Requests: 2
 Storage Read Execution Time: 2002.137 ms
 Storage Execution Time: 2002.137 ms
 Peak Memory Usage: 141 kB

Learn more