Submodule box.info | Tarantool



The box.info submodule provides access to information about server instance variables.

  • cluster.uuid is the UUID of the replica set. Every instance in a replica set will have the same cluster.uuid value. This value is also stored in the _schema system space.
  • gc() returns the state of the Tarantool garbage collector including the checkpoints and their consumers (users); see details below.
  • id corresponds to replication.id (see below).
  • lsn corresponds to replication.lsn (see below).
  • memory() returns the statistics about memory (see below).
  • pid is the process ID. This value is also shown by the tarantool module and by the Linux command ps -A.
  • ro is true if the instance is in “read-only” mode (same as read_only in box.cfg{}), or if status is ‘orphan’.
  • signature is the sum of all lsn values from the vector clocks (vclock) of all instances in the replica set.
  • status is the current state of the instance. It can be:
    • running – the instance is loaded,
    • loading – the instance is either recovering xlogs/snapshots or bootstrapping,
    • orphan – the instance has not (yet) succeeded in joining the required number of masters (see orphan status),
    • hot_standby – the instance is standing by another instance.
  • uptime is the number of seconds since the instance started. This value can also be retrieved with tarantool.uptime().
  • uuid corresponds to replication.uuid (see below).
  • vclock corresponds to replication.downstream.vclock (see below).
  • version is the Tarantool version. This value is also shown by tarantool -V.
  • vinyl() returns runtime statistics for the vinyl storage engine. This function is deprecated; use box.stat.vinyl() instead.
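The fields above can be read directly from the Tarantool console. The sketch below prints a few of them and checks the documented relationship between signature and vclock (run it inside a running Tarantool instance; box.* is not available in plain Lua):

```lua
-- Run in the Tarantool console; box.* exists only inside a Tarantool instance.
-- Print a few of the fields described above.
print('version:   ' .. box.info.version)
print('status:    ' .. box.info.status)
print('uptime:    ' .. box.info.uptime .. ' seconds')
print('read-only: ' .. tostring(box.info.ro))

-- signature is documented as the sum of all lsn values in the vclock,
-- so it can be cross-checked by summing box.info.vclock by hand:
local sum = 0
for _, lsn in pairs(box.info.vclock) do
    sum = sum + lsn
end
assert(sum == box.info.signature)
```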

The memory function of box.info gives the admin user a picture of the whole Tarantool instance.


To get a picture of the vinyl subsystem, use box.stat.vinyl() instead.

  • memory().cache – number of bytes used for caching user data. The memtx storage engine does not require a cache, so in fact this is the number of bytes in the cache for the tuples stored for the vinyl storage engine.
  • memory().data – number of bytes used for storing user data (the tuples) with the memtx engine and with level 0 of the vinyl engine, without taking memory fragmentation into account.
  • memory().index – number of bytes used for indexing user data, including memtx and vinyl memory tree extents, the vinyl page index, and the vinyl bloom filters.
  • memory().lua – number of bytes used for the Lua runtime.
  • memory().net – number of bytes used for network input/output buffers.
  • memory().tx – number of bytes in use by active transactions. For the vinyl storage engine, this is the total size of all allocated objects (struct txv, struct vy_tx, struct vy_read_interval) and tuples pinned for those objects.

An example with a minimum allocation while only the memtx storage engine is in use:

- cache: 0
  data: 6552
  tx: 0
  lua: 1315567
  net: 98304
  index: 1196032
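Totals like the one implied by the figures above can be computed by summing the memory() fields. A minimal sketch (run in the Tarantool console):

```lua
-- Run in the Tarantool console. Sum the box.info.memory() fields
-- described above to get a rough total of the bytes accounted for.
local m = box.info.memory()
local total = m.cache + m.data + m.index + m.lua + m.net + m.tx
print(string.format('total accounted: %d bytes', total))
```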

The gc function of box.info gives the admin user a picture of the factors that affect the Tarantool garbage collector. The garbage collector compares vclock (vector clock) values of users and checkpoints, so a look at box.info.gc() may show why the garbage collector has not removed old WAL files, or show what it may soon remove.

  • gc().consumers – a list of users whose requests might affect the garbage collector.
  • gc().checkpoints – a list of preserved checkpoints.
  • gc().checkpoints[n].references – a list of references to a checkpoint.
  • gc().checkpoints[n].vclock – a checkpoint’s vclock value.
  • gc().checkpoints[n].signature – a sum of a checkpoint’s vclock’s components.
  • gc().checkpoint_is_in_progress – true if a checkpoint is in progress, otherwise false.
  • gc().vclock – the garbage collector’s vclock.
  • gc().signature – the sum of the garbage collector’s checkpoint’s components.
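The fields above can be combined into a quick diagnostic of what the garbage collector is waiting on. This is a sketch only: the consumer fields used here (name, signature) are assumptions based on typical box.info.gc() output, not guaranteed by this text. Run it in the Tarantool console:

```lua
-- Run in the Tarantool console. List checkpoints and consumers that
-- may be holding back WAL file removal.
local gc = box.info.gc()
print('checkpoint in progress: ' .. tostring(gc.checkpoint_is_in_progress))
for i, cp in ipairs(gc.checkpoints) do
    print(string.format('checkpoint %d: signature %s', i, tostring(cp.signature)))
end
for _, c in ipairs(gc.consumers) do
    -- A consumer whose signature lags behind gc.signature may be the
    -- reason old WAL files have not been removed yet.
    print(string.format('consumer: %s (signature %s)',
                        tostring(c.name), tostring(c.signature)))
end
```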

The replication section of box.info is a table array with statistics for all instances in the replica set that the current instance belongs to (see also “Monitoring a replica set”):

In the following, n is the index number of one table item, for example replication[1], which has data about server instance number 1, which may or may not be the same as the current instance (the “current instance” is what is responding to box.info).

  • replication[n].id is a short numeric identifier of instance n within the replica set. This value is stored in the _cluster system space.
  • replication[n].uuid is a globally unique identifier of instance n. This value is stored in the _cluster system space.
  • replication[n].lsn is the log sequence number (LSN) for the latest entry in instance n’s write ahead log (WAL).
  • replication[n].upstream appears (is not nil) if the current instance is following or intending to follow instance n, which ordinarily means replication[n].upstream.status = follow, replication[n].upstream.peer = the URL of instance n which is being followed, and replication[n].upstream.lag and idle = the instance’s speed, described later. Another way to say this: replication[n].upstream will appear when replication[n].upstream.peer is not the URL of the current instance, the peer is not read-only, and the peer was specified in box.cfg{replication={...}}, so it is shown in box.cfg.replication.
  • replication[n].upstream.status is the replication status of the connection with instance n:
    • auth means that authentication is happening.
    • connecting means that connection is happening.
    • disconnected means that it is not connected to the replica set (due to network problems, not replication errors).
    • follow means that the current instance’s role is “replica” (read-only, or not read-only but acting as a replica for this remote peer in a master-master configuration), and is receiving or able to receive data from instance n’s (upstream) master.
    • stopped means that replication was stopped due to a replication error (for example duplicate key).
    • sync means that the master and replica are synchronizing to have the same data.
  • replication[n].upstream.idle is the time (in seconds) since the last event was received. This is the primary indicator of replication health. See more in Monitoring a replica set.
  • replication[n].upstream.lag is the time difference between the local time of instance n, recorded when the event was received, and the local time at another master recorded when the event was written to the write ahead log on that master. See more in Monitoring a replica set.

  • replication[n].upstream.message contains an error message in case of a degraded state, otherwise it is nil.

  • replication[n].downstream appears (is not nil) with data about an instance that is following instance n or is intending to follow it, which ordinarily means replication[n].downstream.status = follow.

  • replication[n].downstream.vclock contains the vector clock, which is a table of ‘id, lsn’ pairs, for example vclock: {1: 3054773, 4: 8938827, 3: 285902018}. (Notice that the table may have multiple pairs although vclock is a singular name).

    Even if instance n is removed, its values will still appear here; however, its values will be overridden if an instance joins later with the same UUID. Vector clock pairs will only appear if lsn > 0.

    replication[n].downstream.vclock may be the same as the current instance’s vclock (box.info.vclock) because this is for all known vclock values of the cluster. A master will know what is in a replica’s copy of vclock because, when the master makes a data change, it sends the change information to the replica (including the master’s vector clock), and the replica replies with what is in its entire vector clock table.

  • replication[n].downstream.idle is the time (in seconds) since the last time that instance n sent events through the downstream replication.

  • replication[n].downstream.status is the replication status for downstream replications:

    • stopped means that downstream replication has stopped,
    • follow means that downstream replication is in progress (instance n is ready to accept data from the master or is currently doing so).
  • replication[n].downstream.message and replication[n].downstream.system_message will be nil unless a problem occurs with the connection. For example, if instance n goes down, then one may see status = 'stopped', message = 'unexpected EOF when reading from socket', and system_message = 'Broken pipe'. See also degraded state.
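The upstream and downstream fields above can be combined into a minimal replication health report. This is a sketch using only the fields documented in this section; run it in the Tarantool console of any instance in the replica set:

```lua
-- Run in the Tarantool console. Walk box.info.replication and report
-- the upstream/downstream state described above for each instance.
for n, r in pairs(box.info.replication) do
    print(string.format('instance %d (uuid %s, lsn %d)', r.id, r.uuid, r.lsn))
    if r.upstream ~= nil then
        print('  upstream:   ' .. r.upstream.status ..
              ', idle ' .. r.upstream.idle .. 's')
        -- message is nil unless the connection is in a degraded state
        if r.upstream.message ~= nil then
            print('  upstream error: ' .. r.upstream.message)
        end
    end
    if r.downstream ~= nil then
        print('  downstream: ' .. r.downstream.status ..
              ', idle ' .. r.downstream.idle .. 's')
    end
end
```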

Since box.info contents are dynamic, it’s not possible to iterate over keys with the Lua pairs() function. For this purpose, box.info() builds and returns a Lua table with all keys and values provided in the submodule.

Return: keys and values in the box.info submodule.
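The difference can be shown in two lines (run in the Tarantool console):

```lua
-- box.info itself is not iterable with pairs(), but box.info() returns
-- a plain Lua table that is.
for key, value in pairs(box.info()) do
    print(key, value)
end
```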


This example is for a master-replica set that contains one master instance and one replica instance. The request was issued at the replica instance.

- vinyl: []
  version: 2.2.0-482-g8c84932ad
  id: 2
  ro: true
  status: running
  vclock: {1: 9}
  uptime: 356
  lsn: 0
  memory: []
  cluster:
    uuid: e261a5bc-6303-4de3-9873-556f311255e0
  pid: 160
  gc: []
  signature: 9
  replication:
    1:
      id: 1
      uuid: fce71bb3-0e99-40ec-ab7e-5159487e18d1
      lsn: 9
      upstream:
        status: follow
        idle: 0.035709699994186
        peer: replicator@
        lag: 0.00016164779663086
      downstream:
        status: follow
        idle: 0.59840899999836
        vclock: {1: 9}
    2:
      id: 2
      uuid: bc4629ce-ea31-4f75-b805-a4807bcacc93
      lsn: 0
  uuid: bc4629ce-ea31-4f75-b805-a4807bcacc93