fix: use atomic writes for parallel WAL restore spool #198

Merged

fcanovai merged 1 commit into cloudnative-pg:main from lambcode-unified:bl-fix-parallel-wal-restore-race on Mar 11, 2026

Conversation

@lambcode-unified
Contributor

Summary

Fixes a race condition in parallel WAL restore (cloudnative-pg/cloudnative-pg#10092) that can cause PostgreSQL recovery to fail with "invalid checkpoint record" errors when maxParallel > 1.

Problem

When restoring WAL files in parallel, prefetched files are written directly to the spool directory with their final filename. If PostgreSQL requests a file while it's still being downloaded, MoveOut() can copy a partially-written file to pg_wal/, causing recovery to fail.

Timeline of race condition:
T0: Goroutine starts downloading WAL-B to spool/WAL-B
T1: Download in progress (file partially written)
T2: PostgreSQL requests WAL-B
T3: MoveOut() sees file exists, copies partial file to pg_wal/
T4: PostgreSQL reads corrupt WAL → "invalid checkpoint record"
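
To make the race concrete, here is a minimal sketch of the pre-fix pattern; the prefetchRacy helper and its signature are illustrative, not the actual restorer code:

```go
package spoolsketch

import (
	"io"
	"os"
	"path/filepath"
)

// prefetchRacy illustrates the race: the WAL segment is created under its
// final name before its contents are complete, so any concurrent reader
// can observe a partial file.
func prefetchRacy(spoolDir, walName string, src io.Reader) error {
	// T0: the final filename becomes visible as soon as the file is created.
	dst, err := os.Create(filepath.Join(spoolDir, walName))
	if err != nil {
		return err
	}
	defer dst.Close()

	// T1-T3: while this copy is in flight, a concurrent MoveOut() that
	// checks the same path sees a partially-written WAL segment.
	_, err = io.Copy(dst, src)
	return err
}
```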

Solution

Use atomic writes with a staging pattern:

  1. Download prefetched WALs to .tmp in the spool directory
  2. After successful download, atomically rename .tmp to final name
  3. Contains() and MoveOut() only see files without .tmp suffix

Since the temp file and the final file are on the same filesystem, os.Rename() is atomic on POSIX systems: there is no window in which a partial file is visible.
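
As an illustration, here is a minimal sketch of the staging pattern; the writeAtomically helper is hypothetical, while the PR's actual implementation lives in pkg/spool and pkg/restorer:

```go
package spoolsketch

import (
	"io"
	"os"
	"path/filepath"
)

// writeAtomically stages the download in a ".tmp" sibling and renames it
// into place only after the copy has fully succeeded.
func writeAtomically(spoolDir, walName string, src io.Reader) error {
	tmpPath := filepath.Join(spoolDir, walName+".tmp")

	tmp, err := os.Create(tmpPath)
	if err != nil {
		return err
	}
	if _, err := io.Copy(tmp, src); err != nil {
		tmp.Close()
		os.Remove(tmpPath) // drop the partial download
		return err
	}
	if err := tmp.Close(); err != nil {
		os.Remove(tmpPath)
		return err
	}

	// Same filesystem, so the rename is atomic on POSIX: readers see either
	// no file at all or a complete one, never a partial write.
	return os.Rename(tmpPath, filepath.Join(spoolDir, walName))
}
```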

Changes

  • pkg/spool/spool.go: Added TempFileName(), Commit(), and CleanupTemp() methods (a possible shape is sketched after this list)
  • pkg/restorer/restorer.go: Modified RestoreList() to use temp files for prefetched WALs
  • pkg/spool/spool_test.go: Added 7 tests including race condition verification
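
The diff itself isn't reproduced here, so the following is only a guess at the shape of the new spool API: the WALSpool type, its field, and the method signatures are assumptions, with only the method names taken from the list above.

```go
package spoolsketch

import (
	"os"
	"path/filepath"
	"strings"
)

// WALSpool is a stand-in for the spool type in pkg/spool.
type WALSpool struct {
	spoolDirectory string
}

// TempFileName returns the staging path for a WAL file being downloaded.
func (s *WALSpool) TempFileName(walName string) string {
	return filepath.Join(s.spoolDirectory, walName+".tmp")
}

// Commit atomically renames a completed download to its final name.
func (s *WALSpool) Commit(walName string) error {
	return os.Rename(s.TempFileName(walName), filepath.Join(s.spoolDirectory, walName))
}

// CleanupTemp removes leftover ".tmp" files, e.g. after an interrupted run.
func (s *WALSpool) CleanupTemp() error {
	entries, err := os.ReadDir(s.spoolDirectory)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".tmp") {
			if err := os.Remove(filepath.Join(s.spoolDirectory, e.Name())); err != nil {
				return err
			}
		}
	}
	return nil
}

// Contains reports whether a fully-downloaded WAL file is in the spool.
// Staging files never match because they carry the ".tmp" suffix.
func (s *WALSpool) Contains(walName string) (bool, error) {
	if _, err := os.Stat(filepath.Join(s.spoolDirectory, walName)); err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}
```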

Testing

=== RUN TestCatalog
Ran 10 of 10 Specs in 0.003 seconds
SUCCESS! -- 10 Passed | 0 Failed | 0 Pending | 0 Skipped

Key tests that verify the fix (a sketch of a similar spec follows this list):

  • Contains does NOT see temp files
  • MoveOut does NOT see temp files
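
The suite is Ginkgo-based, and a spec in the spirit of those two checks might look like the sketch below. It exercises the hypothetical WALSpool sketch above, not the real pkg/spool API:

```go
package spoolsketch

import (
	"os"
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestSpoolSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "spool staging sketch")
}

var _ = Describe("WAL spool staging", func() {
	const walName = "000000010000000000000001"

	It("hides in-flight downloads until Commit", func() {
		spool := &WALSpool{spoolDirectory: GinkgoT().TempDir()}

		// Simulate a download still in progress: only the ".tmp" file exists.
		Expect(os.WriteFile(spool.TempFileName(walName), []byte("partial"), 0o600)).To(Succeed())
		Expect(spool.Contains(walName)).To(BeFalse())

		// After Commit, the segment is visible under its final name.
		Expect(spool.Commit(walName)).To(Succeed())
		Expect(spool.Contains(walName)).To(BeTrue())
	})
})
```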

@lambcode-unified requested a review from a team as a code owner on February 27, 2026 at 15:36
@leonardoce force-pushed the bl-fix-parallel-wal-restore-race branch from 7547e08 to 9727f4c on March 6, 2026 at 13:15
@leonardoce
Contributor

Thank you @lambcode-unified for your contribution! Can you please sign-off your commit?

@lambcode-unified force-pushed the bl-fix-parallel-wal-restore-race branch from 9727f4c to 8e7f5c0 on March 6, 2026 at 17:03
@lambcode-unified
Contributor Author

> Thank you @lambcode-unified for your contribution! Can you please sign-off your commit?

Done!

@mnencia force-pushed the bl-fix-parallel-wal-restore-race branch 2 times, most recently from 802c6ba to cd3f496, on March 9, 2026 at 13:31
@leonardoce
Copy link
Contributor

I tested it manually with replica clusters and with the barman-cloud CNPG-i plugin.
It fixes the issues correctly. I'm approving it.

Fixes a race condition where MoveOut could read partially-written
WAL files during parallel restore, causing "invalid checkpoint record"
errors.

Downloads now write to .tmp files and atomically rename on completion.

Assisted-by: Claude

Signed-off-by: Brian Lamb <244594801+lambcode-unified@users.noreply.github.com>
@sxd force-pushed the bl-fix-parallel-wal-restore-race branch from cd3f496 to 5441a83 on March 11, 2026 at 09:05
@fcanovai merged commit e89e4ac into cloudnative-pg:main on Mar 11, 2026
5 checks passed


Development

Successfully merging this pull request may close these issues.

[Bug]: Race condition in parallel WAL restore causes "invalid checkpoint record" errors when maxParallel > 1

3 participants