| author | Tom Lane <tgl@sss.pgh.pa.us> | 2006-03-31 23:32:07 +0000 |
|---|---|---|
| committer | Tom Lane <tgl@sss.pgh.pa.us> | 2006-03-31 23:32:07 +0000 |
| commit | a8b8f4db23cff16af50a2b960cb8d20d39b761cf | |
| tree | 224f8cb0da7e2e17ccfd7b8a030db1664acd46c1 /src/backend/catalog/index.c | |
| parent | 89395bfa6f2fafccec10be377fcf759030910654 | |
| download | postgresql-a8b8f4db23cff16af50a2b960cb8d20d39b761cf.tar.gz | |
Clean up WAL/buffer interactions as per my recent proposal. Get rid of the
misleadingly-named WriteBuffer routine, and instead require routines that
change buffer pages to call MarkBufferDirty (which does exactly what it says).
We also require that they do so before calling XLogInsert; this takes care of
the synchronization requirement documented in SyncOneBuffer. Note that
because bufmgr takes the buffer content lock (in shared mode) while writing
out any buffer, it doesn't matter whether MarkBufferDirty is executed before
the buffer content change is complete, so long as the content change is
completed before releasing exclusive lock on the buffer. So it's OK to set
the dirtybit before we fill in the LSN.

This eliminates the former kluge of needing to set the dirtybit in LockBuffer.
Aside from making the code more transparent, we can also add some new
debugging assertions, in particular that the caller of MarkBufferDirty must
hold the buffer content lock, not merely a pin.
Diffstat (limited to 'src/backend/catalog/index.c')
| -rw-r--r-- | src/backend/catalog/index.c | 11 |
1 file changed, 4 insertions(+), 7 deletions(-)
```diff
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1a5c3b3c3b..75302edfaf 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -8,7 +8,7 @@
  *
  *
  * IDENTIFICATION
- *	  $PostgreSQL: pgsql/src/backend/catalog/index.c,v 1.264 2006/03/24 23:02:17 tgl Exp $
+ *	  $PostgreSQL: pgsql/src/backend/catalog/index.c,v 1.265 2006/03/31 23:32:06 tgl Exp $
  *
  *
  * INTERFACE ROUTINES
@@ -1066,12 +1066,9 @@ setRelhasindex(Oid relid, bool hasindex, bool isprimary, Oid reltoastidxid)
 	}
 
 	if (pg_class_scan)
-		LockBuffer(pg_class_scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-
-	if (pg_class_scan)
 	{
-		/* Write the modified tuple in-place */
-		WriteNoReleaseBuffer(pg_class_scan->rs_cbuf);
+		MarkBufferDirty(pg_class_scan->rs_cbuf);
+		LockBuffer(pg_class_scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
 		/* Send out shared cache inval if necessary */
 		if (!IsBootstrapProcessingMode())
 			CacheInvalidateHeapTuple(pg_class, tuple);
@@ -1294,8 +1291,8 @@ UpdateStats(Oid relid, double reltuples)
 		LockBuffer(pg_class_scan->rs_cbuf, BUFFER_LOCK_EXCLUSIVE);
 		rd_rel->relpages = (int32) relpages;
 		rd_rel->reltuples = (float4) reltuples;
+		MarkBufferDirty(pg_class_scan->rs_cbuf);
 		LockBuffer(pg_class_scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-		WriteNoReleaseBuffer(pg_class_scan->rs_cbuf);
 		if (!IsBootstrapProcessingMode())
 			CacheInvalidateHeapTuple(pg_class, tuple);
 	}
```
