mirror of
https://git.postgresql.org/git/postgresql.git
synced 2024-12-21 08:29:39 +08:00
Create a "fast path" for acquiring weak relation locks.
When an AccessShareLock, RowShareLock, or RowExclusiveLock is requested
on an unshared database relation, and we can verify that no conflicting
locks can possibly be present, record the lock in a per-backend queue,
stored within the PGPROC, rather than in the primary lock table.  This
eliminates a great deal of contention on the lock manager LWLocks.

This patch also refactors the interface between GetLockStatusData() and
pg_lock_status() to be a bit more abstract, so that we don't rely so
heavily on the lock manager's internal representation details.  The new
fast path lock structures don't have a LOCK or PROCLOCK structure to
return, so we mustn't depend on that for purposes of listing outstanding
locks.

Review by Jeff Davis.
This commit is contained in:
parent
7ed8f6c517
commit
3cba8999b3
@@ -7040,6 +7040,12 @@
       <entry></entry>
       <entry>True if lock is held, false if lock is awaited</entry>
      </row>
+     <row>
+      <entry><structfield>fastpath</structfield></entry>
+      <entry><type>boolean</type></entry>
+      <entry></entry>
+      <entry>True if lock was taken via fast path, false if taken via main lock table</entry>
+     </row>
     </tbody>
    </tgroup>
   </table>
@@ -7090,16 +7096,29 @@
   <para>
    The <structname>pg_locks</structname> view displays data from both the
    regular lock manager and the predicate lock manager, which are
-   separate systems.  When this view is accessed, the internal data
-   structures of each lock manager are momentarily locked, and copies are
-   made for the view to display.  Each lock manager will therefore
-   produce a consistent set of results, but as we do not lock both lock
-   managers simultaneously, it is possible for locks to be taken or
-   released after we interrogate the regular lock manager and before we
-   interrogate the predicate lock manager.  Each lock manager is only
-   locked for the minimum possible time so as to reduce the performance
-   impact of querying this view, but there could nevertheless be some
-   impact on database performance if it is frequently accessed.
+   separate systems.  This data is not guaranteed to be entirely consistent.
+   Data on fast-path locks (with <structfield>fastpath</> = <literal>true</>)
+   is gathered from each backend one at a time, without freezing the state of
+   the entire lock manager, so it is possible for locks to be taken and
+   released as information is gathered.  Note, however, that these locks are
+   known not to conflict with any other lock currently in place.  After
+   all backends have been queried for fast-path locks, the remainder of the
+   lock manager is locked as a unit, and a consistent snapshot of all
+   remaining locks is dumped as an atomic action.  Once the lock manager has
+   been unlocked, the predicate lock manager is similarly locked and all
+   predicate locks are dumped as an atomic action.  Thus, with the exception
+   of fast-path locks, each lock manager will deliver a consistent set of
+   results, but as we do not lock both lock managers simultaneously, it is
+   possible for locks to be taken or released after we interrogate the regular
+   lock manager and before we interrogate the predicate lock manager.
   </para>
 
+  <para>
+   Locking the lock manager and/or predicate lock manager could have some
+   impact on database performance if this view is very frequently accessed.
+   The locks are held only for the minimum amount of time necessary to
+   obtain data from the lock manager, but this does not completely eliminate
+   the possibility of a performance impact.
+  </para>
+
   <para>
@@ -4592,7 +4592,6 @@ MaxLivePostmasterChildren(void)
 extern slock_t *ShmemLock;
 extern LWLock *LWLockArray;
 extern slock_t *ProcStructLock;
-extern PROC_HDR *ProcGlobal;
 extern PGPROC *AuxiliaryProcs;
 extern PMSignalData *PMSignalState;
 extern pgsocket pgStatSock;
@@ -60,20 +60,29 @@ identical lock mode sets.  See src/tools/backend/index.html and
 src/include/storage/lock.h for more details.  (Lock modes are also called
 lock types in some places in the code and documentation.)
 
-There are two fundamental lock structures in shared memory: the
-per-lockable-object LOCK struct, and the per-lock-and-requestor PROCLOCK
-struct.  A LOCK object exists for each lockable object that currently has
-locks held or requested on it.  A PROCLOCK struct exists for each backend
-that is holding or requesting lock(s) on each LOCK object.
+There are two main methods for recording locks in shared memory.  The primary
+mechanism uses two main structures: the per-lockable-object LOCK struct, and
+the per-lock-and-requestor PROCLOCK struct.  A LOCK object exists for each
+lockable object that currently has locks held or requested on it.  A PROCLOCK
+struct exists for each backend that is holding or requesting lock(s) on each
+LOCK object.
 
-In addition to these, each backend maintains an unshared LOCALLOCK structure
-for each lockable object and lock mode that it is currently holding or
-requesting.  The shared lock structures only allow a single lock grant to
-be made per lockable object/lock mode/backend.  Internally to a backend,
-however, the same lock may be requested and perhaps released multiple times
-in a transaction, and it can also be held both transactionally and session-
-wide.  The internal request counts are held in LOCALLOCK so that the shared
-data structures need not be accessed to alter them.
+There is also a special "fast path" mechanism which backends may use to
+record a limited number of locks with very specific characteristics: they must
+use the DEFAULT lockmethod; they must represent a lock on a database relation
+(not a shared relation); they must be a "weak" lock which is unlikely to
+conflict (AccessShareLock, RowShareLock, or RowExclusiveLock); and the system
+must be able to quickly verify that no conflicting locks could possibly be
+present.  See "Fast Path Locking", below, for more details.
+
+Each backend also maintains an unshared LOCALLOCK structure for each lockable
+object and lock mode that it is currently holding or requesting.  The shared
+lock structures only allow a single lock grant to be made per lockable
+object/lock mode/backend.  Internally to a backend, however, the same lock may
+be requested and perhaps released multiple times in a transaction, and it can
+also be held both transactionally and session-wide.  The internal request
+counts are held in LOCALLOCK so that the shared data structures need not be
+accessed to alter them.
 
 ---------------------------------------------------------------------------
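As an aside, the fast-path eligibility rules described in the hunk above (DEFAULT lockmethod, an unshared database relation, one of the three weak modes) can be sketched as a standalone predicate. This is an illustrative sketch only; the enum and function names below are assumptions of this sketch, not symbols from the PostgreSQL source.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative stand-ins for the lock modes; the ordering (weak modes
 * first, strong modes last) mirrors the README's description, but these
 * names and values are assumptions of this sketch.
 */
typedef enum
{
    NoLockSketch = 0,
    AccessShareLockSketch,          /* weak */
    RowShareLockSketch,             /* weak */
    RowExclusiveLockSketch,         /* weak */
    ShareUpdateExclusiveLockSketch,
    ShareLockSketch,
    ShareRowExclusiveLockSketch,
    ExclusiveLockSketch,
    AccessExclusiveLockSketch
} LockModeSketch;

/*
 * A lock may use the fast path only if it uses the DEFAULT lockmethod,
 * targets an unshared database relation, and is one of the three weak modes.
 */
static bool
eligible_for_fast_path(bool default_lockmethod,
                       bool unshared_relation,
                       LockModeSketch mode)
{
    return default_lockmethod &&
           unshared_relation &&
           mode >= AccessShareLockSketch &&
           mode <= RowExclusiveLockSketch;
}
```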
@@ -250,6 +259,65 @@ tradeoff: we could instead recalculate the partition number from the LOCKTAG
 when needed.
 
 
+Fast Path Locking
+-----------------
+
+Fast path locking is a special purpose mechanism designed to reduce the
+overhead of taking and releasing weak relation locks.  SELECT, INSERT,
+UPDATE, and DELETE must acquire a lock on every relation they operate on,
+as well as various system catalogs that can be used internally.  These locks
+are notable not only for the very high frequency with which they are taken
+and released, but also for the fact that they virtually never conflict.
+Many DML operations can proceed in parallel against the same table at the
+same time; only DDL operations such as CLUSTER, ALTER TABLE, or DROP -- or
+explicit user action such as LOCK TABLE -- will create lock conflicts with
+the "weak" locks (AccessShareLock, RowShareLock, RowExclusiveLock) acquired
+by DML operations.
+
+The primary locking mechanism does not cope well with this workload.  Even
+though the lock manager locks are partitioned, the locktag for any given
+relation still falls in one, and only one, partition.  Thus, if many short
+queries are accessing the same relation, the lock manager partition lock for
+that partition becomes a contention bottleneck.  This effect is measurable
+even on 2-core servers, and becomes very pronounced as core count increases.
+
+To alleviate this bottleneck, beginning in PostgreSQL 9.2, each backend is
+permitted to record a limited number of locks on unshared relations in an
+array within its PGPROC structure, rather than using the primary lock table.
+This is called the "fast path" mechanism, and can only be used when the
+locker can verify that no conflicting locks can possibly exist.
+
+A key point of this algorithm is that it must be possible to verify the
+absence of possibly conflicting locks without fighting over a shared LWLock or
+spinlock.  Otherwise, this effort would simply move the contention bottleneck
+from one place to another.  We accomplish this using an array of 1024 integer
+counters, which are in effect a 1024-way partitioning of the lock space.  Each
+counter records the number of "strong" locks (that is, ShareLock,
+ShareRowExclusiveLock, ExclusiveLock, and AccessExclusiveLock) on unshared
+relations that fall into that partition.  When this counter is non-zero, the
+fast path mechanism may not be used for relation locks in that partition.  A
+strong locker bumps the counter and then scans each per-backend array for
+matching fast-path locks; any which are found must be transferred to the
+primary lock table before attempting to acquire the lock, to ensure proper
+lock conflict and deadlock detection.
+
+On an SMP system, we must guarantee proper memory synchronization.  Here we
+rely on the fact that LWLock acquisition acts as a memory sequence point: if
+A performs a store, A and B both acquire an LWLock in either order, and B
+then performs a load on the same memory location, it is guaranteed to see
+A's store.  In this case, each backend's fast-path lock queue is protected
+by an LWLock.  A backend wishing to acquire a fast-path lock grabs this
+LWLock before examining FastPathStrongLocks to check for the presence of a
+conflicting strong lock.  And the backend attempting to acquire a strong
+lock, because it must transfer any matching weak locks taken via the fast-path
+mechanism to the shared lock table, will acquire every LWLock protecting
+a backend fast-path queue in turn.  Thus, if we examine FastPathStrongLocks
+and see a zero, then either the value is truly zero, or if it is a stale value,
+the strong locker has yet to acquire the per-backend LWLock we now hold (or,
+indeed, even the first per-backend LWLock) and will notice any weak lock we
+take when it does.
+
+
 The Deadlock Detection Algorithm
 --------------------------------
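The 1024-way strong-lock counter scheme described in this hunk can be sketched in standalone C. This is a simplified illustration under stated assumptions: the partition is derived here from a bare relation OID rather than a full locktag hash, and the names (`fast_path_strong_count`, `note_strong_lock`, `fast_path_eligible`) are inventions of this sketch, not the actual PostgreSQL symbols; the real code must also handle concurrency, which is omitted.

```c
#include <assert.h>
#include <stdint.h>

/* 1024-way partitioning of the relation lock space, as in the README. */
#define STRONG_LOCK_PARTITIONS 1024

/* One counter of outstanding "strong" locks per partition. */
static int fast_path_strong_count[STRONG_LOCK_PARTITIONS];

/* Map a relation OID (a stand-in for the full locktag hash) to a partition. */
static unsigned
lock_partition(uint32_t relid)
{
    return relid % STRONG_LOCK_PARTITIONS;
}

/*
 * A weak lock may use the fast path only when no strong lock is recorded
 * for its partition; a non-zero counter forces the primary lock table.
 */
static int
fast_path_eligible(uint32_t relid)
{
    return fast_path_strong_count[lock_partition(relid)] == 0;
}

/*
 * A strong locker bumps the counter first, then (in the real system) scans
 * every backend's fast-path array and transfers matching locks to the
 * primary lock table before acquiring its own lock.
 */
static void
note_strong_lock(uint32_t relid)
{
    fast_path_strong_count[lock_partition(relid)]++;
}
```

Note that a strong lock on one relation only disables the fast path for the (at most 1/1024th of) relations sharing its partition, which is why weak lockers almost never fall back to the primary table in practice.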
File diff suppressed because it is too large
@@ -167,6 +167,9 @@ NumLWLocks(void)
 	/* bufmgr.c needs two for each shared buffer */
 	numLocks += 2 * NBuffers;
 
+	/* lock.c needs one per backend */
+	numLocks += MaxBackends;
+
 	/* clog.c needs one per CLOG buffer */
 	numLocks += NUM_CLOG_BUFFERS;
 
@@ -67,7 +67,7 @@ PGPROC	   *MyProc = NULL;
 NON_EXEC_STATIC slock_t *ProcStructLock = NULL;
 
 /* Pointers to shared-memory structures */
-NON_EXEC_STATIC PROC_HDR *ProcGlobal = NULL;
+PROC_HDR   *ProcGlobal = NULL;
 NON_EXEC_STATIC PGPROC *AuxiliaryProcs = NULL;
 
 /* If we are waiting for a lock, this points to the associated LOCALLOCK */
@@ -183,6 +183,8 @@ InitProcGlobal(void)
 	 * one of these purposes, and they do not move between groups.
 	 */
 	procs = (PGPROC *) ShmemAlloc(TotalProcs * sizeof(PGPROC));
+	ProcGlobal->allProcs = procs;
+	ProcGlobal->allProcCount = TotalProcs;
 	if (!procs)
 		ereport(FATAL,
 				(errcode(ERRCODE_OUT_OF_MEMORY),
@@ -192,6 +194,7 @@ InitProcGlobal(void)
 	{
 		/* Common initialization for all PGPROCs, regardless of type. */
 		PGSemaphoreCreate(&(procs[i].sem));
+		procs[i].backendLock = LWLockAssign();
 		InitSharedLatch(&procs[i].waitLatch);
 
 		/*
@@ -49,6 +49,8 @@ typedef struct
 	int			predLockIdx;	/* current index for pred lock */
 } PG_Lock_Status;
 
+/* Number of columns in pg_locks output */
+#define NUM_LOCK_STATUS_COLUMNS		15
+
 /*
  * VXIDGetDatum - Construct a text representation of a VXID
@@ -96,7 +98,7 @@ pg_lock_status(PG_FUNCTION_ARGS)
 
 		/* build tupdesc for result tuples */
 		/* this had better match pg_locks view in system_views.sql */
-		tupdesc = CreateTemplateTupleDesc(14, false);
+		tupdesc = CreateTemplateTupleDesc(NUM_LOCK_STATUS_COLUMNS, false);
 		TupleDescInitEntry(tupdesc, (AttrNumber) 1, "locktype",
 						   TEXTOID, -1, 0);
 		TupleDescInitEntry(tupdesc, (AttrNumber) 2, "database",
@@ -125,6 +127,8 @@ pg_lock_status(PG_FUNCTION_ARGS)
 						   TEXTOID, -1, 0);
 		TupleDescInitEntry(tupdesc, (AttrNumber) 14, "granted",
 						   BOOLOID, -1, 0);
+		TupleDescInitEntry(tupdesc, (AttrNumber) 15, "fastpath",
+						   BOOLOID, -1, 0);
 
 		funcctx->tuple_desc = BlessTupleDesc(tupdesc);
 
@@ -149,21 +153,17 @@ pg_lock_status(PG_FUNCTION_ARGS)
 
 	while (mystatus->currIdx < lockData->nelements)
 	{
-		PROCLOCK   *proclock;
-		LOCK	   *lock;
-		PGPROC	   *proc;
 		bool		granted;
 		LOCKMODE	mode = 0;
 		const char *locktypename;
 		char		tnbuf[32];
-		Datum		values[14];
-		bool		nulls[14];
+		Datum		values[NUM_LOCK_STATUS_COLUMNS];
+		bool		nulls[NUM_LOCK_STATUS_COLUMNS];
 		HeapTuple	tuple;
 		Datum		result;
+		LockInstanceData *instance;
 
-		proclock = &(lockData->proclocks[mystatus->currIdx]);
-		lock = &(lockData->locks[mystatus->currIdx]);
-		proc = &(lockData->procs[mystatus->currIdx]);
+		instance = &(lockData->locks[mystatus->currIdx]);
 
 		/*
 		 * Look to see if there are any held lock modes in this PROCLOCK. If
@@ -171,14 +171,14 @@ pg_lock_status(PG_FUNCTION_ARGS)
 		 * again.
 		 */
 		granted = false;
-		if (proclock->holdMask)
+		if (instance->holdMask)
 		{
 			for (mode = 0; mode < MAX_LOCKMODES; mode++)
 			{
-				if (proclock->holdMask & LOCKBIT_ON(mode))
+				if (instance->holdMask & LOCKBIT_ON(mode))
 				{
 					granted = true;
-					proclock->holdMask &= LOCKBIT_OFF(mode);
+					instance->holdMask &= LOCKBIT_OFF(mode);
 					break;
 				}
 			}
@@ -190,10 +190,10 @@ pg_lock_status(PG_FUNCTION_ARGS)
 		 */
 		if (!granted)
 		{
-			if (proc->waitLock == proclock->tag.myLock)
+			if (instance->waitLockMode != NoLock)
 			{
 				/* Yes, so report it with proper mode */
-				mode = proc->waitLockMode;
+				mode = instance->waitLockMode;
 
 				/*
 				 * We are now done with this PROCLOCK, so advance pointer to
@@ -218,22 +218,22 @@ pg_lock_status(PG_FUNCTION_ARGS)
 		MemSet(values, 0, sizeof(values));
 		MemSet(nulls, false, sizeof(nulls));
 
-		if (lock->tag.locktag_type <= LOCKTAG_LAST_TYPE)
-			locktypename = LockTagTypeNames[lock->tag.locktag_type];
+		if (instance->locktag.locktag_type <= LOCKTAG_LAST_TYPE)
+			locktypename = LockTagTypeNames[instance->locktag.locktag_type];
 		else
 		{
 			snprintf(tnbuf, sizeof(tnbuf), "unknown %d",
-					 (int) lock->tag.locktag_type);
+					 (int) instance->locktag.locktag_type);
 			locktypename = tnbuf;
 		}
 		values[0] = CStringGetTextDatum(locktypename);
 
-		switch ((LockTagType) lock->tag.locktag_type)
+		switch ((LockTagType) instance->locktag.locktag_type)
 		{
 			case LOCKTAG_RELATION:
 			case LOCKTAG_RELATION_EXTEND:
-				values[1] = ObjectIdGetDatum(lock->tag.locktag_field1);
-				values[2] = ObjectIdGetDatum(lock->tag.locktag_field2);
+				values[1] = ObjectIdGetDatum(instance->locktag.locktag_field1);
+				values[2] = ObjectIdGetDatum(instance->locktag.locktag_field2);
 				nulls[3] = true;
 				nulls[4] = true;
 				nulls[5] = true;
@@ -243,9 +243,9 @@ pg_lock_status(PG_FUNCTION_ARGS)
 				nulls[9] = true;
 				break;
 			case LOCKTAG_PAGE:
-				values[1] = ObjectIdGetDatum(lock->tag.locktag_field1);
-				values[2] = ObjectIdGetDatum(lock->tag.locktag_field2);
-				values[3] = UInt32GetDatum(lock->tag.locktag_field3);
+				values[1] = ObjectIdGetDatum(instance->locktag.locktag_field1);
+				values[2] = ObjectIdGetDatum(instance->locktag.locktag_field2);
+				values[3] = UInt32GetDatum(instance->locktag.locktag_field3);
 				nulls[4] = true;
 				nulls[5] = true;
 				nulls[6] = true;
@@ -254,10 +254,10 @@ pg_lock_status(PG_FUNCTION_ARGS)
 				nulls[9] = true;
 				break;
 			case LOCKTAG_TUPLE:
-				values[1] = ObjectIdGetDatum(lock->tag.locktag_field1);
-				values[2] = ObjectIdGetDatum(lock->tag.locktag_field2);
-				values[3] = UInt32GetDatum(lock->tag.locktag_field3);
-				values[4] = UInt16GetDatum(lock->tag.locktag_field4);
+				values[1] = ObjectIdGetDatum(instance->locktag.locktag_field1);
+				values[2] = ObjectIdGetDatum(instance->locktag.locktag_field2);
+				values[3] = UInt32GetDatum(instance->locktag.locktag_field3);
+				values[4] = UInt16GetDatum(instance->locktag.locktag_field4);
 				nulls[5] = true;
 				nulls[6] = true;
 				nulls[7] = true;
@@ -265,7 +265,8 @@ pg_lock_status(PG_FUNCTION_ARGS)
 				nulls[9] = true;
 				break;
 			case LOCKTAG_TRANSACTION:
-				values[6] = TransactionIdGetDatum(lock->tag.locktag_field1);
+				values[6] =
+					TransactionIdGetDatum(instance->locktag.locktag_field1);
 				nulls[1] = true;
 				nulls[2] = true;
 				nulls[3] = true;
@@ -276,8 +277,8 @@ pg_lock_status(PG_FUNCTION_ARGS)
 				nulls[9] = true;
 				break;
 			case LOCKTAG_VIRTUALTRANSACTION:
-				values[5] = VXIDGetDatum(lock->tag.locktag_field1,
-										 lock->tag.locktag_field2);
+				values[5] = VXIDGetDatum(instance->locktag.locktag_field1,
+										 instance->locktag.locktag_field2);
 				nulls[1] = true;
 				nulls[2] = true;
 				nulls[3] = true;
@@ -291,10 +292,10 @@ pg_lock_status(PG_FUNCTION_ARGS)
 			case LOCKTAG_USERLOCK:
 			case LOCKTAG_ADVISORY:
 			default:			/* treat unknown locktags like OBJECT */
-				values[1] = ObjectIdGetDatum(lock->tag.locktag_field1);
-				values[7] = ObjectIdGetDatum(lock->tag.locktag_field2);
-				values[8] = ObjectIdGetDatum(lock->tag.locktag_field3);
-				values[9] = Int16GetDatum(lock->tag.locktag_field4);
+				values[1] = ObjectIdGetDatum(instance->locktag.locktag_field1);
+				values[7] = ObjectIdGetDatum(instance->locktag.locktag_field2);
+				values[8] = ObjectIdGetDatum(instance->locktag.locktag_field3);
+				values[9] = Int16GetDatum(instance->locktag.locktag_field4);
 				nulls[2] = true;
 				nulls[3] = true;
 				nulls[4] = true;
@@ -303,13 +304,14 @@ pg_lock_status(PG_FUNCTION_ARGS)
 				break;
 		}
 
-		values[10] = VXIDGetDatum(proc->backendId, proc->lxid);
-		if (proc->pid != 0)
-			values[11] = Int32GetDatum(proc->pid);
+		values[10] = VXIDGetDatum(instance->backend, instance->lxid);
+		if (instance->pid != 0)
+			values[11] = Int32GetDatum(instance->pid);
 		else
 			nulls[11] = true;
-		values[12] = CStringGetTextDatum(GetLockmodeName(LOCK_LOCKMETHOD(*lock), mode));
+		values[12] = CStringGetTextDatum(GetLockmodeName(instance->locktag.locktag_lockmethodid, mode));
 		values[13] = BoolGetDatum(granted);
+		values[14] = BoolGetDatum(instance->fastpath);
 
 		tuple = heap_form_tuple(funcctx->tuple_desc, values, nulls);
 		result = HeapTupleGetDatum(tuple);
@@ -327,8 +329,8 @@ pg_lock_status(PG_FUNCTION_ARGS)
 
 		PREDICATELOCKTARGETTAG *predTag = &(predLockData->locktags[mystatus->predLockIdx]);
 		SERIALIZABLEXACT *xact = &(predLockData->xacts[mystatus->predLockIdx]);
-		Datum		values[14];
-		bool		nulls[14];
+		Datum		values[NUM_LOCK_STATUS_COLUMNS];
+		bool		nulls[NUM_LOCK_STATUS_COLUMNS];
 		HeapTuple	tuple;
 		Datum		result;
 
@@ -374,11 +376,12 @@ pg_lock_status(PG_FUNCTION_ARGS)
 			nulls[11] = true;
 
 		/*
-		 * Lock mode. Currently all predicate locks are SIReadLocks, which are
-		 * always held (never waiting)
+		 * Lock mode. Currently all predicate locks are SIReadLocks, which
+		 * are always held (never waiting) and have no fast path
 		 */
 		values[12] = CStringGetTextDatum("SIReadLock");
 		values[13] = BoolGetDatum(true);
+		values[14] = BoolGetDatum(false);
 
 		tuple = heap_form_tuple(funcctx->tuple_desc, values, nulls);
 		result = HeapTupleGetDatum(tuple);
@@ -2811,7 +2811,7 @@ DATA(insert OID = 2078 (  set_config		PGNSP PGUID 12 1 0 0 0 f f f f f v 3 0 25
 DESCR("SET X as a function");
 DATA(insert OID = 2084 ( pg_show_all_settings PGNSP PGUID 12 1 1000 0 0 f f f t t s 0 0 2249 "" "{25,25,25,25,25,25,25,25,25,25,25,1009,25,25,25,23}" "{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{name,setting,unit,category,short_desc,extra_desc,context,vartype,source,min_val,max_val,enumvals,boot_val,reset_val,sourcefile,sourceline}" _null_ show_all_settings _null_ _null_ _null_ ));
 DESCR("SHOW ALL as a function");
-DATA(insert OID = 1371 ( pg_lock_status PGNSP PGUID 12 1 1000 0 0 f f f t t v 0 0 2249 "" "{25,26,26,23,21,25,28,26,26,21,25,23,25,16}" "{o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid,virtualtransaction,pid,mode,granted}" _null_ pg_lock_status _null_ _null_ _null_ ));
+DATA(insert OID = 1371 ( pg_lock_status PGNSP PGUID 12 1 1000 0 0 f f f t t v 0 0 2249 "" "{25,26,26,23,21,25,28,26,26,21,25,23,25,16,16}" "{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid,virtualtransaction,pid,mode,granted,fastpath}" _null_ pg_lock_status _null_ _null_ _null_ ));
 DESCR("view system lock information");
 DATA(insert OID = 1065 ( pg_prepared_xact PGNSP PGUID 12 1 1000 0 0 f f f t t v 0 0 2249 "" "{28,25,1184,26,26}" "{o,o,o,o,o}" "{transaction,gid,prepared,ownerid,dbid}" _null_ pg_prepared_xact _null_ _null_ _null_ ));
 DESCR("view two-phase transactions");
@@ -412,6 +412,7 @@ typedef struct LOCALLOCK
 	int64		nLocks;			/* total number of times lock is held */
 	int			numLockOwners;	/* # of relevant ResourceOwners */
 	int			maxLockOwners;	/* allocated size of array */
+	bool		holdsStrongLockCount;	/* did we bump FastPathStrongLocks? */
 	LOCALLOCKOWNER *lockOwners; /* dynamically resizable array */
 } LOCALLOCK;
 
@@ -419,19 +420,25 @@ typedef struct LOCALLOCK
 
 
 /*
- * This struct holds information passed from lmgr internals to the lock
- * listing user-level functions (in lockfuncs.c).  For each PROCLOCK in
- * the system, copies of the PROCLOCK object and associated PGPROC and
- * LOCK objects are stored.  Note there will often be multiple copies
- * of the same PGPROC or LOCK --- to detect whether two are the same,
- * compare the PROCLOCK tag fields.
+ * These structures hold information passed from lmgr internals to the lock
+ * listing user-level functions (in lockfuncs.c).
  */
 
+typedef struct LockInstanceData
+{
+	LOCKTAG		locktag;		/* locked object */
+	LOCKMASK	holdMask;		/* locks held by this PGPROC */
+	LOCKMODE	waitLockMode;	/* lock awaited by this PGPROC, if any */
+	BackendId	backend;		/* backend ID of this PGPROC */
+	LocalTransactionId lxid;	/* local transaction ID of this PGPROC */
+	int			pid;			/* pid of this PGPROC */
+	bool		fastpath;		/* taken via fastpath? */
+} LockInstanceData;
+
 typedef struct LockData
 {
-	int			nelements;		/* The length of each of the arrays */
-	PROCLOCK   *proclocks;
-	PGPROC	   *procs;
-	LOCK	   *locks;
+	int			nelements;		/* The length of the array */
+	LockInstanceData *locks;
 } LockData;
 
 
@@ -50,6 +50,14 @@ struct XidCache
 /* flags reset at EOXact */
 #define		PROC_VACUUM_STATE_MASK (0x0E)
 
+/*
+ * We allow a small number of "weak" relation locks (AccessShareLock,
+ * RowShareLock, RowExclusiveLock) to be recorded in the PGPROC structure
+ * rather than the main lock table.  This eases contention on the lock
+ * manager LWLocks.  See storage/lmgr/README for additional details.
+ */
+#define		FP_LOCK_SLOTS_PER_BACKEND 16
+
 /*
  * Each backend has a PGPROC struct in shared memory.  There is also a list of
  * currently-unused PGPROC structs that will be reallocated to new backends.
@@ -137,6 +145,13 @@ struct PGPROC
 	SHM_QUEUE	myProcLocks[NUM_LOCK_PARTITIONS];
 
 	struct XidCache subxids;	/* cache for subtransaction XIDs */
+
+	/* Per-backend LWLock.  Protects fields below. */
+	LWLockId	backendLock;	/* protects the fields below */
+
+	/* Lock manager data, recording fast-path locks taken by this backend. */
+	uint64		fpLockBits;		/* lock modes held for each fast-path slot */
+	Oid			fpRelId[FP_LOCK_SLOTS_PER_BACKEND];		/* slots for rel oids */
 };
 
 /* NOTE: "typedef struct PGPROC PGPROC" appears in storage/lock.h. */
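The hunk above reserves a single `uint64` (`fpLockBits`) to record the lock modes held in all 16 fast-path slots. One plausible packing, sketched below, gives each slot one bit per weak lock mode (16 slots x 3 modes = 48 bits, which fits in 64). The three-bits-per-slot layout and the helper names are assumptions of this sketch, not necessarily what the committed code does.

```c
#include <assert.h>
#include <stdint.h>

#define FP_SLOTS 16             /* mirrors FP_LOCK_SLOTS_PER_BACKEND */
#define FP_BITS_PER_SLOT 3      /* one bit per weak lock mode (assumption) */

/* Record that weak_mode (0..2) is held in the given fast-path slot. */
static uint64_t
fp_set_bit(uint64_t bits, int slot, int weak_mode)
{
    return bits | ((uint64_t) 1 << (slot * FP_BITS_PER_SLOT + weak_mode));
}

/* Check whether weak_mode is held in the given fast-path slot. */
static int
fp_test_bit(uint64_t bits, int slot, int weak_mode)
{
    return (int) ((bits >> (slot * FP_BITS_PER_SLOT + weak_mode)) & 1);
}
```

Keeping all slots' mode bits in one word lets a strong locker inspect a backend's entire fast-path state with a single load once it holds that backend's `backendLock`.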
@@ -150,6 +165,10 @@ extern PGDLLIMPORT PGPROC *MyProc;
  */
 typedef struct PROC_HDR
 {
+	/* Array of PGPROC structures (not including dummies for prepared txns) */
+	PGPROC	   *allProcs;
+	/* Length of allProcs array */
+	uint32		allProcCount;
 	/* Head of list of free PGPROC structures */
 	PGPROC	   *freeProcs;
 	/* Head of list of autovacuum's free PGPROC structures */
@@ -163,6 +182,8 @@ typedef struct PROC_HDR
 	int			startupBufferPinWaitBufId;
 } PROC_HDR;
 
+extern PROC_HDR *ProcGlobal;
+
 /*
  * We set aside some extra PGPROC structures for auxiliary processes,
  * ie things that aren't full-fledged backends but need shmem access.
@@ -1284,7 +1284,7 @@ SELECT viewname, definition FROM pg_views WHERE schemaname <> 'information_schem
  pg_cursors              | SELECT c.name, c.statement, c.is_holdable, c.is_binary, c.is_scrollable, c.creation_time FROM pg_cursor() c(name, statement, is_holdable, is_binary, is_scrollable, creation_time);
  pg_group                | SELECT pg_authid.rolname AS groname, pg_authid.oid AS grosysid, ARRAY(SELECT pg_auth_members.member FROM pg_auth_members WHERE (pg_auth_members.roleid = pg_authid.oid)) AS grolist FROM pg_authid WHERE (NOT pg_authid.rolcanlogin);
  pg_indexes              | SELECT n.nspname AS schemaname, c.relname AS tablename, i.relname AS indexname, t.spcname AS tablespace, pg_get_indexdef(i.oid) AS indexdef FROM ((((pg_index x JOIN pg_class c ON ((c.oid = x.indrelid))) JOIN pg_class i ON ((i.oid = x.indexrelid))) LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) LEFT JOIN pg_tablespace t ON ((t.oid = i.reltablespace))) WHERE ((c.relkind = 'r'::"char") AND (i.relkind = 'i'::"char"));
- pg_locks                | SELECT l.locktype, l.database, l.relation, l.page, l.tuple, l.virtualxid, l.transactionid, l.classid, l.objid, l.objsubid, l.virtualtransaction, l.pid, l.mode, l.granted FROM pg_lock_status() l(locktype, database, relation, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, mode, granted);
+ pg_locks                | SELECT l.locktype, l.database, l.relation, l.page, l.tuple, l.virtualxid, l.transactionid, l.classid, l.objid, l.objsubid, l.virtualtransaction, l.pid, l.mode, l.granted, l.fastpath FROM pg_lock_status() l(locktype, database, relation, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, mode, granted, fastpath);
 pg_prepared_statements  | SELECT p.name, p.statement, p.prepare_time, p.parameter_types, p.from_sql FROM pg_prepared_statement() p(name, statement, prepare_time, parameter_types, from_sql);
 pg_prepared_xacts       | SELECT p.transaction, p.gid, p.prepared, u.rolname AS owner, d.datname AS database FROM ((pg_prepared_xact() p(transaction, gid, prepared, ownerid, dbid) LEFT JOIN pg_authid u ON ((p.ownerid = u.oid))) LEFT JOIN pg_database d ON ((p.dbid = d.oid)));
 pg_roles                | SELECT pg_authid.rolname, pg_authid.rolsuper, pg_authid.rolinherit, pg_authid.rolcreaterole, pg_authid.rolcreatedb, pg_authid.rolcatupdate, pg_authid.rolcanlogin, pg_authid.rolreplication, pg_authid.rolconnlimit, '********'::text AS rolpassword, pg_authid.rolvaliduntil, s.setconfig AS rolconfig, pg_authid.oid FROM (pg_authid LEFT JOIN pg_db_role_setting s ON (((pg_authid.oid = s.setrole) AND (s.setdatabase = (0)::oid))));