Why transaction order matters, even if you’re only reading

The SQL standard defines four isolation levels, and Postgres lets you pick one per transaction with the SET TRANSACTION statement (or as part of BEGIN). They are:

  • Read uncommitted: It’s possible to read data written by another concurrent, uncommitted transaction (this is called a “dirty read”). (Note: Postgres accepts this isolation level but actually treats it as Read committed, so dirty reads can’t happen.)
  • Read committed: Dirty reads are not possible.
  • Repeatable read: “Nonrepeatable reads” are not possible. A nonrepeatable read is when you read some data, then do the same read again later in the transaction and find that the data has been modified by another transaction that committed in the meantime.
  • Serializable: The strongest isolation level, this guarantees that repeating a read within a transaction will not only always return the same data for each row, but also the same set of rows. And it further guarantees that the end result of any set of concurrent transactions will be a possible outcome of having run those same transactions in serial, in some order.

This last guarantee is one against serialization anomalies. A serialization anomaly is any sequence of events that produces a result that would be impossible if those transactions ran one-at-a-time (“in serial”). This guarantee is tricky to reason about, but the Postgres docs have a great example to demonstrate it.
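
To make that concrete, here’s a minimal sketch of how you opt a single transaction into one of these levels. The two forms below are equivalent:

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... reads and writes ...
COMMIT;

-- or, folding it into BEGIN:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... reads and writes ...
COMMIT;

-- You can also change the default for the whole session:
-- SET default_transaction_isolation = 'serializable';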

A simple serialization anomaly

The example from the docs goes like this. You have the following table, mytab:

 class | value
-------+-------
     1 |    10
     1 |    20
     2 |   100
     2 |   200

And the following two transactions run concurrently:

Transaction A            Transaction B            Notes
-----------------------  -----------------------  -----------------------
BEGIN;                   BEGIN;
SELECT SUM(value)
FROM mytab
WHERE class = 1;                                  The result is 30.
                         SELECT SUM(value)
                         FROM mytab
                         WHERE class = 2;         The result is 300.
INSERT INTO mytab
(class, value)
VALUES (2, 30);
                         INSERT INTO mytab
                         (class, value)
                         VALUES (1, 300);
COMMIT;                  COMMIT;

This is a serialization anomaly, since no serial ordering of A and B could produce the same end result:

  • A => B: Transaction B writes 330, because it sees A’s new row and reads a class-2 sum of 300 + 30.
  • B => A: Transaction A writes 330, because it sees B’s new row and reads a class-1 sum of 30 + 300.

The explanation is pretty straightforward once you walk through it. Each transaction writes data that the other reads in order to decide what to write, but since each one runs against its own snapshot, neither sees the other’s write. So you end up with a result that could never occur if the transactions had been run in serial.
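
If you’d like to see this for yourself, the example is easy to reproduce in two psql sessions. Here’s a sketch: at Repeatable Read, both commits succeed and you get the anomalous result; change both BEGINs to SERIALIZABLE and the second commit instead fails with a serialization error.

-- One-time setup:
CREATE TABLE mytab (class int, value int);
INSERT INTO mytab (class, value) VALUES (1, 10), (1, 20), (2, 100), (2, 200);

-- Session A:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(value) FROM mytab WHERE class = 1;   -- returns 30

-- Session B:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(value) FROM mytab WHERE class = 2;   -- returns 300

-- Session A:
INSERT INTO mytab (class, value) VALUES (2, 30);
COMMIT;

-- Session B:
INSERT INTO mytab (class, value) VALUES (1, 300);
COMMIT;  -- succeeds at Repeatable Read; at Serializable this is where the
         -- "could not serialize access" error shows up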

But I also noticed this passage in the docs:

A READ ONLY transaction may be able to release its SIRead locks before completion, if it detects that no conflicts can still occur which could lead to a serialization anomaly. In fact, READ ONLY transactions will often be able to establish that fact at startup and avoid taking any predicate locks. If you explicitly request a SERIALIZABLE READ ONLY DEFERRABLE transaction, it will block until it can establish this fact.

This surprised me because, intuitively, I wouldn’t think that a READ ONLY transaction could participate in a serialization anomaly. How can it be that just reading data concurrently with other transactions could cause a result inconsistent with serial execution? Such a transaction can’t even lock rows. I racked my brains trying to think of such a situation, but came up empty.

A serialization anomaly from a READ ONLY transaction

Lucky for me, back in about 2000, somebody else racked their brains about it too, and produced this 3-page paper describing just such a situation. Right near the beginning, I read:

It has been widely assumed that, under SI [a.k.a. SERIALIZABLE transaction isolation mode], read-only transactions always execute serializably provided the concurrent update transactions are serializable. The reason for this is that all SI reads return values from a single instant of time when all committed transactions have completed their writes and no writes of non-committed transactions are visible. This seems to imply that read-only transactions will not read anomalous results so long as the update transactions with which they execute do not write such results.

“Yeah, exactly!” I thought. But it turns out to be possible, and here’s how it works.

Suppose we have the following table, representing bank account balances.

              Table "public.accounts"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+---------
id | integer | | not null |
value | integer | | not null |
Indexes:
"accounts_pkey" PRIMARY KEY, btree (id)

And suppose further that the following two accounts exist, with zero balance:

# SELECT * FROM accounts;
 id | value
----+-------
  1 |     0
  2 |     0

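If you’d like to follow along, a minimal setup that produces the table above might look like this:

CREATE TABLE accounts (
    id    integer PRIMARY KEY,
    value integer NOT NULL
);

INSERT INTO accounts (id, value) VALUES (1, 0), (2, 0);
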
These two accounts are owned by the same customer, and the sum of their balances must never go negative. If a withdrawal does push the sum negative, a $1 overdraft fee is assessed. Now consider the following 3 DB transactions:

  • T1: Deposits $20 into account 2.
  • T2: Withdraws $10 from account 1.
  • T3: Retrieves the balances of both accounts and prints them out.

Let’s walk through these transactions and see how a serialization anomaly can emerge:

T1                  T2                  T3                  Notes
------------------  ------------------  ------------------  ----------------------------------
BEGIN TRANSACTION   BEGIN TRANSACTION   BEGIN TRANSACTION
ISOLATION LEVEL     ISOLATION LEVEL     ISOLATION LEVEL
SERIALIZABLE;       SERIALIZABLE;       SERIALIZABLE;
                    SELECT id, value                        T2 needs the balances of both
                    FROM accounts                           accounts in order to know whether
                    WHERE id IN (1,2);                      their sum is negative. Both are 0.
SELECT value
FROM accounts
WHERE id = 2;                                               Still 0.
UPDATE accounts
SET value = 20
WHERE id = 2;                                               Deposits $20 into account 2.
COMMIT;                                                     T2 is now working with an outdated
                                                            value.
                                        SELECT id, value
                                        FROM accounts
                                        WHERE id IN (1,2);  T3 gets values (0, 20).
                                        COMMIT;             T3 sends this result to the printer.
                    UPDATE accounts                         Withdraws $10 from account 1,
                    SET value = -11                         charging a $1 overdraft fee.
                    WHERE id = 1;
                    COMMIT;

When T2 tries to commit, it fails with the following error:

ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during write.
HINT: The transaction might succeed if retried.

Postgres has identified a potential serialization anomaly and prevented it by aborting T2. It’s a serialization anomaly because, if T2 were allowed to commit, then the customer would be charged an overdraft fee for withdrawing $10 despite having a print-out showing a total balance of $20. This could never happen if these transactions were run one-at-a-time in any order:

  • T1 => T2 => T3: print-out has a total balance of $10, and no overdraft fee is charged.
  • T1 => T3 => T2: print-out shows $20, and no overdraft fee is charged.
  • T2 => T1 => T3: print-out shows $9 ($-11 + $20).
  • T2 => T3 => T1: print-out shows $-11.
  • T3 => T1 => T2: print-out shows $0.
  • T3 => T2 => T1: print-out shows $0.

And, as we can also see, this serialization anomaly couldn’t occur if T3 were absent. Either of the possible orderings of T1 and T2 produces an internally consistent result.

Avoiding this problem

If you’re not using the Serializable isolation level, then instead of an error at COMMIT time, you just get a bug. And a scrap of paper that the SEC is going to have some embarrassing questions about.

So you can either use Serializable, or stick with a lower isolation level such as Read Committed and introduce a lock. For example, if T2 were to exclusively lock the rows it reads (including account 2’s row) by using SELECT ... FOR UPDATE, then T1’s update of account 2 would block until T2 committed. This would eliminate the bug.
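
Here’s a rough sketch of what T2 could look like under Read Committed with that approach. The decision about whether to charge the overdraft fee lives in the application and is elided; the table and column names are from the example above:

BEGIN;  -- Read Committed is the default

-- Lock both of the customer's rows while reading them, so a concurrent
-- deposit or withdrawal on either account has to wait for this transaction.
SELECT id, value FROM accounts WHERE id IN (1, 2) FOR UPDATE;

-- Suppose the application decides, based on those values, that a $10
-- withdrawal from account 1 pushes the combined balance negative:
UPDATE accounts SET value = value - 10 - 1 WHERE id = 1;  -- includes the $1 fee

COMMIT;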

Another fix, using the Repeatable Read isolation level, would be to add an additional row containing the sum of balances for each group of accounts, and require that any transaction updating one of those balances also update the sum row. Whichever of two concurrent transactions updated the sum row second would get a serialization error and be forced to retry, thus eliminating the bug.
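
A sketch of that approach, with a hypothetical account_totals table (the table and column names here are mine, not part of the original scenario):

-- One running-total row per customer:
CREATE TABLE account_totals (
    customer_id integer PRIMARY KEY,
    total       integer NOT NULL
);
INSERT INTO account_totals (customer_id, total) VALUES (1, 0);

-- T1's deposit, rewritten to also touch the total row:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET value = value + 20 WHERE id = 2;
UPDATE account_totals SET total = total + 20 WHERE customer_id = 1;
-- If a concurrent transaction (like T2) has already updated and committed this
-- customer's total row, the UPDATE above fails with "could not serialize
-- access due to concurrent update" and the whole transaction must be retried.
COMMIT;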

If you can, I recommend starting with Serializable, and only switching to a lower isolation level on a transaction-by-transaction basis when the performance cost of serializability checks becomes too high.

Concurrent locks and MultiXacts in Postgres

Pretty recently, I was troubleshooting a performance issue in a production Rails app backed by Postgres. There was this one class of query that would get slower and slower over the course of about an hour. The exact pathology is a tale for another time, but the investigation led me down the rabbit hole of a particular data structure in Postgres internals called a MultiXact.

Let’s look at what a MultiXact is, why it exists, and under what circumstances one comes into existence.

What the hell is a MultiXact?

[Image: a two-panel meme based on The Fifth Element. First panel: Leeloo holds up her Multipass, captioned “Leeloo Dallas MultiXact”. Second panel: Korben, exasperated: “Yeah she knows it’s a MultiXact”.]

In Postgres, it’s possible for two threads (or indeed, even three!) to lock the same row at the same time, as long as their locks don’t conflict with one another. The rules for what kinds of locks conflict with what other kinds of locks are admirably documented.

When you lock a row in Postgres, what you do is, you put your transaction ID in the xmax field of the corresponding tuple in storage (more on xmax on the Cybertec blog). This way, anyone who comes along looking for this row will know that you already have it locked. The latecomer can then wait for the lock:

Transaction 768             Transaction 769             Notes
--------------------------  --------------------------  --------------------------------------
SELECT *
FROM foo
WHERE x = 9
FOR UPDATE;                                             Transaction 768 now has a row lock. The
                                                        row's xmax field contains the value 768.
                            SELECT *
                            FROM foo
                            WHERE x = 9
                            FOR UPDATE;                 Transaction 769 retrieves the current
                                                        row, sees that Transaction 768 already
                                                        holds a lock that conflicts with the
                                                        lock it wants, and waits for Transaction
                                                        768 to be over.
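
If you want to poke at this yourself, xmax is a system column you can select directly. A quick sketch, using the same hypothetical foo table:

-- In another session, while transaction 768 holds its row lock:
SELECT xmax, * FROM foo WHERE x = 9;
-- The xmax column shows 768: the ID of the transaction currently holding the lock.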

But what if two transactions want to hold non-conflicting locks on the same row at the same time? For example:

Transaction 772             Transaction 773             Notes
--------------------------  --------------------------  --------------------------------------
SELECT *
FROM foo
WHERE x = 9
FOR SHARE;                                              Afterward, transaction 772 has a row
                                                        lock. The row's xmax field contains the
                                                        value 772.
                            SELECT *
                            FROM foo
                            WHERE x = 9
                            FOR SHARE;                  What happens now?

Transaction 773 can’t just write its transaction ID into the xmax field. That would amount to preempting Transaction 772‘s lock… which would defeat the whole point of locking. To solve this problem, Postgres creates a MultiXact. A MultiXact essentially bundles together some set of transactions so that those transactions can all lock the same row at the same time. Instead of a transaction ID, a new MultiXact ID is written to the row’s xmax.

Transaction 772             Transaction 773             Notes
--------------------------  --------------------------  --------------------------------------
SELECT *
FROM foo
WHERE x = 9
FOR SHARE;
                            SELECT *
                            FROM foo
                            WHERE x = 9
                            FOR SHARE;                  Now both transactions have the row
                                                        locked. The row's xmax field is set to
                                                        14, which is a MultiXact ID. MultiXact
                                                        14 refers to transactions 772 and 773
                                                        by their transaction IDs.
                            COMMIT;                     Transaction 773 is now over, but the
                                                        row's xmax value is still 14. Since
                                                        MultiXacts are immutable, MultiXact 14
                                                        still refers to the now-defunct
                                                        Transaction 773 as well as the ongoing
                                                        Transaction 772.
COMMIT;                                                 With both transactions over, there are
                                                        no remaining locks active on the row.
                                                        Its xmax value is still 14, and will
                                                        remain 14 until another thread locks
                                                        the row or the table is vacuumed.

It bears repeating that MultiXacts are immutable. If transactions 104 and 108 both have row R locked as part of MultiXact 19, and transaction 117 locks row R too, transaction 117 can’t just join MultiXact 19. Instead, a new MultiXact with ID 20 is created, which contains 104, 108, and 117.
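
If you want to see whether a row’s lock is held via a plain transaction ID or a MultiXact, the pgrowlocks contrib module will tell you. A quick sketch, again using the hypothetical foo table:

CREATE EXTENSION IF NOT EXISTS pgrowlocks;

-- While the two FOR SHARE locks from the example above are held:
SELECT locked_row, locker, multi, xids, modes
FROM pgrowlocks('foo');
-- "multi" is true when the locker is a MultiXact ID rather than a transaction
-- ID, and "xids" lists the member transaction IDs.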

This means that every time an additional transaction wants to lock an already-locked row, Postgres has to write a whole new MultiXact out to a buffer. For large MultiXacts, the time cost of all this reading and writing can become quite significant, especially since access to the underlying data region is subject to a set of global mutexes.

Counterintuitively, a MultiXact doesn’t always involve multiple transactions.

When I first encountered MultiXacts in production, it was extra confusing, because I was seeing MultiXacts get created within a single transaction. “What gives?” I thought! “They’re supposed to be multi.”

Turns out, we were doing something like this:

BEGIN;

-- Take a shared lock on the row.
SELECT *
FROM queue_jobs
WHERE id = 4
FOR SHARE;

SAVEPOINT foo;

-- Now take an exclusive lock on the same row, after the savepoint.
SELECT *
FROM queue_jobs
WHERE id = 4
FOR UPDATE;

A savepoint doesn’t start a whole new transaction, but it does create a subtransaction, which gets its own transaction ID once it writes or locks something. Postgres needs to keep track of the fact that the FOR UPDATE lock was taken after the savepoint, so that the lock can be relinquished by a subsequent ROLLBACK TO foo command. The row therefore ends up locked under two transaction IDs at once (the top-level transaction’s and the subtransaction’s), so a new MultiXact gets created to hold them both, and its ID gets placed in the row’s xmax field.

I hope I’ve contributed to your understanding of MultiXacts in Postgres! It’s the sort of thing where you don’t ever need to think or know about it, until suddenly you do.