Fix memory leak in LogStandbySnapshot().

The array allocated by GetRunningTransactionLocks() needs to be pfree'd
when we're done with it.  Otherwise we leak some memory during each
checkpoint, if wal_level = hot_standby.  This manifests as memory bloat
in the checkpointer process, or in bgwriter in versions before we made
the checkpointer separate.

Reported and fixed by Naoya Anzai.  Back-patch to 9.0 where the issue
was introduced.

In passing, improve comments for GetRunningTransactionLocks(), and add
an Assert that we didn't overrun the palloc'd array.
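The leaked allocation follows a common contract: a function returns a heap-allocated array and reports its length through an out-parameter, and the caller owns the memory. Below is a minimal stand-alone sketch of that contract using plain malloc/free in place of palloc/pfree; the names (`get_held_locks`, `lock_entry`, `log_snapshot_once`) are hypothetical stand-ins, not PostgreSQL APIs.

```c
#include <stdlib.h>

/* Hypothetical stand-in for GetRunningTransactionLocks(): returns a
 * heap-allocated array and reports its length through *nlocks.  In the
 * server the array is palloc'd, so the caller must pfree it; plain
 * malloc/free models the same ownership contract. */
typedef struct lock_entry
{
	int			xid;
	int			dbid;
	int			relid;
} lock_entry;

lock_entry *
get_held_locks(int *nlocks)
{
	int			n = 3;			/* pretend three locks are held */
	lock_entry *locks = malloc(n * sizeof(lock_entry));

	for (int i = 0; i < n; i++)
		locks[i].xid = locks[i].dbid = locks[i].relid = i;
	*nlocks = n;
	return locks;
}

/* Caller mirroring the fixed LogStandbySnapshot(): release the array
 * unconditionally, since it is allocated even when nlocks == 0. */
int
log_snapshot_once(void)
{
	int			nlocks;
	lock_entry *locks = get_held_locks(&nlocks);

	/* ... the real code would WAL-log the locks here ... */
	free(locks);				/* the line the commit adds (as pfree) */
	return nlocks;
}
```

The omission mattered precisely because the caller is a long-lived process: the checkpointer (or bgwriter, pre-9.2) calls this once per checkpoint, so a few hundred bytes per call accumulate indefinitely.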
commit dbc6eb1f4b
parent 79e15c7d86
Tom Lane 2013-06-04 14:58:46 -04:00
2 changed files with 14 additions and 9 deletions


@@ -865,16 +865,11 @@ LogStandbySnapshot(void)
 	/*
 	 * Get details of any AccessExclusiveLocks being held at the moment.
-	 *
-	 * XXX GetRunningTransactionLocks() currently holds a lock on all
-	 * partitions though it is possible to further optimise the locking. By
-	 * reference counting locks and storing the value on the ProcArray entry
-	 * for each backend we can easily tell if any locks need recording without
-	 * trying to acquire the partition locks and scanning the lock table.
 	 */
 	locks = GetRunningTransactionLocks(&nlocks);
 	if (nlocks > 0)
 		LogAccessExclusiveLocks(nlocks, locks);
+	pfree(locks);
 
 	/*
 	 * Log details of all in-progress transactions.  This should be the last


@@ -3398,18 +3398,26 @@ GetLockStatusData(void)
 }
 
 /*
- * Returns a list of currently held AccessExclusiveLocks, for use
- * by GetRunningTransactionData().
+ * Returns a list of currently held AccessExclusiveLocks, for use by
+ * LogStandbySnapshot().  The result is a palloc'd array,
+ * with the number of elements returned into *nlocks.
+ *
+ * XXX This currently takes a lock on all partitions of the lock table,
+ * but it's possible to do better.  By reference counting locks and storing
+ * the value in the ProcArray entry for each backend we could tell if any
+ * locks need recording without having to acquire the partition locks and
+ * scan the lock table.  Whether that's worth the additional overhead
+ * is pretty dubious though.
 */
 xl_standby_lock *
 GetRunningTransactionLocks(int *nlocks)
 {
+	xl_standby_lock *accessExclusiveLocks;
 	PROCLOCK   *proclock;
 	HASH_SEQ_STATUS seqstat;
 	int			i;
 	int			index;
 	int			els;
-	xl_standby_lock *accessExclusiveLocks;
 
 	/*
 	 * Acquire lock on the entire shared lock data structure.
@@ -3467,6 +3475,8 @@ GetRunningTransactionLocks(int *nlocks)
 		}
 	}
 
+	Assert(index <= els);
+
 	/*
 	 * And release locks.  We do this in reverse order for two reasons: (1)
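The added `Assert(index <= els)` guards a pattern worth noting: the array is sized from an up-front estimate (`els`), then filled through a running write index, so after the fill the index must not have passed the estimate. This is a self-contained sketch of that idiom, not the server code; `collect_even` and its parameters are illustrative names only.

```c
#include <assert.h>
#include <stdlib.h>

/* Fill a buffer whose size was conservatively estimated up front, then
 * check that the writer index never passed the estimate -- the same
 * overrun check the commit adds to GetRunningTransactionLocks(). */
int
collect_even(const int *src, int n, int **out)
{
	int			els = n;		/* conservative size estimate */
	int		   *buf = malloc(els * sizeof(int));
	int			index = 0;

	for (int i = 0; i < n; i++)
	{
		if (src[i] % 2 == 0)
			buf[index++] = src[i];
	}

	assert(index <= els);		/* must not have overrun the palloc'd/malloc'd array */
	*out = buf;
	return index;
}
```

An Assert of this kind costs nothing in production builds (it compiles away unless assertions are enabled) while catching any future drift between the estimate and the fill loop.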