Configuration
Tarantool lets you configure the full topology of a cluster and set parameters specific to individual instances, such as connection settings, memory used to store data, logging, and snapshot settings. Each instance uses this configuration during startup to organize the cluster.
There are two approaches to configuring Tarantool:
Since version 3.0: In the YAML format.
YAML configuration allows you to provide the full cluster topology and specify all configuration options. You can use local configuration in a YAML file for each instance or store configuration data in a reliable centralized storage.
In version 2.11 and earlier: In code using the box.cfg API. In this case, configuration is provided in a Lua initialization script.
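For example, a minimal Lua initialization script might look like the following sketch (the option values are illustrative):

-- init.lua: legacy-style configuration via the box.cfg API
box.cfg{
    listen = 3301,                    -- address used to listen for incoming requests
    memtx_memory = 128 * 1024 * 1024, -- memory (in bytes) allotted for storing data
}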
Note
Starting with version 3.0, configuring Tarantool in code is considered a legacy approach.
YAML configuration describes the full topology of a Tarantool cluster. A cluster’s topology includes the following elements, starting from the lowest level:
groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            # ...
          instance002:
            # ...
instances
An instance represents a single running Tarantool instance. It stores data or might act as a router for handling CRUD requests in a sharded cluster.
replicasets
A replica set is a set of instances that operate on the same data. Replication provides redundancy and increases data availability.
groups
A group provides the ability to organize replica sets. For example, in a sharded cluster, one group can contain storage instances and another group can contain routers used to handle CRUD requests.
You can flexibly configure a cluster’s settings at different levels: from global settings applied to all groups to parameters specific to individual instances.
Note
All the available options are documented in the Configuration reference.
This section provides an overview of how to configure Tarantool in a YAML file.
The example below shows a sample configuration of a single Tarantool instance:
groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
- The instances section includes only one instance named instance001. The iproto.listen.uri option sets an address used to listen for incoming requests.
- The replicasets section contains one replica set named replicaset001.
- The groups section contains one group named group001.
This section shows how to control the scope to which a configuration option applies. Most configuration options can be applied to a specific instance, replica set, group, or to all instances globally.
Instance
To apply certain configuration options to a specific instance, specify such options for this instance only. In the example below, iproto.listen is applied to instance001 only.

groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
Replica set
In this example, iproto.listen is in effect for all instances in replicaset001.

groups:
  group001:
    replicasets:
      replicaset001:
        iproto:
          listen:
          - uri: '127.0.0.1:3301'
        instances:
          instance001: { }
Group
In this example, iproto.listen is in effect for all instances in group001.

groups:
  group001:
    iproto:
      listen:
      - uri: '127.0.0.1:3301'
    replicasets:
      replicaset001:
        instances:
          instance001: { }
Global
In this example, iproto.listen is applied to all instances of the cluster.

iproto:
  listen:
  - uri: '127.0.0.1:3301'
groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001: { }
Configuration scopes above are listed in the order of their precedence – from highest to lowest. For example, if the same option is defined at the instance and global level, the instance’s value takes precedence over the global one.
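For example, in the sketch below, the effective log.level for instance001 is 'debug': the value in the instance scope overrides the global one (log.level is used here purely for illustration):

log:
  level: 'info'

groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            log:
              level: 'debug'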
Note
The Configuration reference contains information about scopes to which each configuration option can be applied.
The example below shows how specific configuration options work in different configuration scopes for a replica set with a manual failover. You can learn more about configuring replication from Replication tutorials.
credentials:
  users:
    replicator:
      password: 'topsecret'
      roles: [replication]

iproto:
  advertise:
    peer:
      login: replicator

replication:
  failover: manual

groups:
  group001:
    replicasets:
      replicaset001:
        leader: instance001
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            iproto:
              listen:
              - uri: '127.0.0.1:3302'
          instance003:
            iproto:
              listen:
              - uri: '127.0.0.1:3303'
credentials (global)
This section is used to create the replicator user and assign it the specified role. These options are applied globally to all instances.
iproto (global, instance)
The iproto section is specified at both the global and instance levels. The iproto.advertise.peer option specifies the parameters used by an instance to connect to another instance as a replica, for example, a URI, a login and password, or SSL parameters. In the example above, the option includes login only. A URI is taken from iproto.listen, which is set at the instance level.
replication (global)
The replication.failover global option sets manual failover for all replica sets.
leader (replica set)
The <replicaset-name>.leader option sets a master instance for replicaset001.
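With manual failover configured this way, you can check which instance currently acts as the master from an instance console: the writable leader reports box.info.ro as false, while replicas report true. A quick sketch (the prompt name is illustrative):

myapp:instance001> box.info.ro
---
- false
...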
An application role is a Lua module that implements specific functions or logic. You can turn on or off a particular role for certain instances in a configuration without restarting these instances.
There can be built-in Tarantool roles, roles provided by third-party Lua modules, or custom roles that are developed as a part of a cluster application. This section describes how to enable and configure roles. To learn how to develop custom roles, see Application roles.
To turn on or off a role for a specific instance or a set of instances, use the roles configuration option.
The example below shows how to enable the roles.crud-router
role provided by the CRUD module using the roles option:
roles: [ roles.crud-router ]
Similarly, you can enable the roles.crud-storage
role to make instances act as CRUD storages:
roles: [ roles.crud-storage ]
Example on GitHub: sharded_cluster_crud
The roles_cfg option allows you to specify the configuration for each role. In this option, the role name is the key and the role configuration is the value.
The example below shows how to enable statistics on called operations by providing the roles.crud-router
role’s configuration:
roles:
- roles.crud-router
- roles.metrics-export
roles_cfg:
  roles.crud-router:
    stats: true
    stats_driver: metrics
    stats_quantiles: true
Example on GitHub: sharded_cluster_crud_metrics
As with most configuration options, roles and their configurations can be defined at different levels. Given that the roles option has the array type and roles_cfg has the map type, there are some specifics in how the configuration is applied:
For roles, an instance’s roles take precedence over roles defined at another level. In the example below, instance001 has only role3:

# ...
replicaset001:
  roles: [ role1, role2 ]
  instances:
    instance001:
      roles: [ role3 ]
Learn more about the order of precedence for different configuration scopes in Configuration scopes.
For roles_cfg, the following rules are applied:

- If a configuration for the same role is provided at different levels, an instance configuration takes precedence over the configuration defined at another level. In the example below, role1.greeting is 'Hi':

# ...
replicaset001:
  roles_cfg:
    role1:
      greeting: 'Hello'
  instances:
    instance001:
      roles: [ role1 ]
      roles_cfg:
        role1:
          greeting: 'Hi'
- If the configurations for different roles are provided at different levels, both configurations are applied at the instance level. In the example below, instance001 has role1.greeting set to 'Hi' and role2.farewell set to 'Bye':

# ...
replicaset001:
  roles_cfg:
    role1:
      greeting: 'Hi'
  instances:
    instance001:
      roles: [ role1, role2 ]
      roles_cfg:
        role2:
          farewell: 'Bye'
Labels allow adding custom attributes to your cluster configuration. A label is
an arbitrary key: value
pair with a string key and value.
labels:
  dc: 'east'
  production: 'false'
Labels can be defined in any configuration scope. An instance receives labels from all scopes it belongs to. A labels section in a group or replica set scope applies to all instances of that group or replica set. To override these labels at the instance level or to add instance-specific labels, define another labels section in the instance scope.
groups:
  group001:
    replicasets:
      replicaset001:
        labels:
          dc: 'east'
          production: 'false'
        instances:
          instance001:
            labels:
              rack: '10'
              production: 'true'
Example on GitHub: labels
To access instance labels from the application code, call the config:get() function:
myapp:instance001> require('config'):get('labels')
---
- production: 'true'
  rack: '10'
  dc: east
...
Labels can be used to direct function calls to instances that match certain criteria using the connpool module.
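For example, a sketch that routes a call to instances labeled dc = 'east', assuming the experimental.connpool module’s call() API and a hypothetical say_hi function defined on the target instances:

-- Runs on a cluster instance; selects a candidate instance
-- whose labels match and calls the function there.
local connpool = require('experimental.connpool')
connpool.call('say_hi', nil, { labels = { dc = 'east' } })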
In a configuration file, you can use the following predefined variables that are replaced with actual values at runtime:
instance_name
replicaset_name
group_name
To reference these variables in a configuration file, enclose them in double curly braces with surrounding whitespace.
In the example below, {{ instance_name }}
is replaced with instance001.
groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            snapshot:
              dir: ./var/{{ instance_name }}/snapshots
            wal:
              dir: ./var/{{ instance_name }}/wals
As a result, the paths to snapshots and write-ahead logs differ for different instances.
A YAML configuration can include parts that apply only to instances that meet certain conditions. This is useful for cluster upgrade scenarios: during an upgrade, instances can be running different Tarantool versions and therefore require different configurations.
Conditional parts are defined in the conditional configuration section in the global scope.
It includes one or more if
subsections. Each if
subsection defines conditions
and configuration parts that apply to instances that meet these conditions.
The example below shows a conditional
section for cluster upgrade from Tarantool 3.0.0
to Tarantool 3.1.0:
- The user-defined label upgraded is true on instances that are running Tarantool 3.1.0 or later. On older versions, it is false.
- Two compat options that were introduced in 3.1.0 are defined for Tarantool 3.1.0 instances. On older versions, they would cause an error.
conditional:
- if: tarantool_version < 3.1.0
  labels:
    upgraded: 'false'

- if: tarantool_version >= 3.1.0
  labels:
    upgraded: 'true'
  compat:
    box_error_serialize_verbose: 'new'
    box_error_unpack_type_and_code: 'new'
Example on GitHub: conditional
if sections can use one variable: tarantool_version. It contains a three-number Tarantool version and can be compared with values of the same format using the comparison operators >, <, >=, <=, ==, and !=.
You can write complex conditions using the logical operators || (OR) and && (AND). Parentheses () can be used to control operator precedence.
conditional:
- if: (tarantool_version > 3.2.0 || tarantool_version == 3.1.3) && tarantool_version <= 3.99.0
  # < ... >
If the same option is set in multiple if
sections that are true for an instance,
this option receives the value from the section declared last in the configuration.
Example:
conditional:
- if: tarantool_version >= 3.0.0
  labels:
    version: '3.0' # applies to versions >= 3.0.0 and < 3.1.0

- if: tarantool_version >= 3.1.0
  labels:
    version: '3.1+' # applies to versions >= 3.1.0
For each configuration parameter, Tarantool provides two sets of predefined environment variables:
- TT_<CONFIG_PARAMETER>. These variables are used to substitute parameters specified in a configuration file. This means that these variables have a higher priority than the options specified in a configuration file.
- TT_<CONFIG_PARAMETER>_DEFAULT. These variables are used to specify default values for parameters missing in a configuration file. These variables have a lower priority than the options specified in a configuration file.
For example, TT_IPROTO_LISTEN
and TT_IPROTO_LISTEN_DEFAULT
correspond to the iproto.listen
option.
TT_SNAPSHOT_DIR
and TT_SNAPSHOT_DIR_DEFAULT
correspond to the snapshot.dir
option.
To see all the supported environment variables, execute the tarantool
command with the --help-env-list
option.
$ tarantool --help-env-list
Note
There are also special TT_INSTANCE_NAME
and TT_CONFIG
environment variables that can be used to start the specified Tarantool instance with configuration from the given file.
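For example, a sketch of starting instance001 with a configuration from config.yaml using these variables (the file name is illustrative):

$ export TT_INSTANCE_NAME='instance001'
$ export TT_CONFIG='config.yaml'
$ tarantool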
Below are a few examples that show how to set environment variables of different types, like string, number, array, or map.
In this example, TT_LOG_LEVEL
is used to set a logging level to CRITICAL
:
$ export TT_LOG_LEVEL='crit'
In this example, a logging level is set to CRITICAL
using a corresponding numeric value:
$ export TT_LOG_LEVEL=3
The examples below show how to set the TT_SHARDING_ROLES
variable that accepts an array value.
Arrays can be passed in two ways: using a simple format …
$ export TT_SHARDING_ROLES=router,storage
… or the JSON format:
$ export TT_SHARDING_ROLES='["router", "storage"]'
The simple format is applicable only to arrays containing scalar values.
To assign map values to environment variables, you can also use simple or JSON formats.
In the example below, TT_LOG_MODULES
sets different logging levels for different modules using a simple format:
$ export TT_LOG_MODULES=module1=info,module2=error
In the next example, TT_ROLES_CFG
is used to specify the value of a custom configuration for a role using the JSON format:
$ export TT_ROLES_CFG='{"greeter":{"greeting":"Hello"}}'
The simple format is applicable only to maps containing scalar values.
In the example below, TT_IPROTO_LISTEN is used to specify a listening host and port:

$ export TT_IPROTO_LISTEN='[{"uri":"127.0.0.1:3311"}]'
You can also pass several listening addresses:
$ export TT_IPROTO_LISTEN='[{"uri":"127.0.0.1:3311"},{"uri":"127.0.0.1:3312"}]'
Enterprise Edition
Centralized configuration storages are supported by the Enterprise Edition only.
Tarantool enables you to store configuration data in one place using a Tarantool or etcd-based storage. To achieve this, you need to:
1. Set up a centralized configuration storage.
2. Publish a cluster’s configuration to the storage (see the sketch after this list).
3. Configure a connection to the storage by providing a local YAML configuration with an endpoint address and key prefix in the config section:

config:
  etcd:
    endpoints:
    - http://localhost:2379
    prefix: /myapp
Learn more from the following guide: Centralized configuration storages.
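For step 2, one possible sketch is publishing a local source.yaml to the etcd prefix above with the tt CLI’s tt cluster publish command (the file name is illustrative):

$ tt cluster publish "http://localhost:2379/myapp" source.yaml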
Tarantool configuration options are applied from multiple sources with the following precedence, from highest to lowest:
1. TT_* environment variables.
2. Configuration from a local YAML file.
3. Centralized configuration.
4. TT_*_DEFAULT environment variables.
If the same option is defined in two or more locations, the option with the highest precedence is applied.
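For example, if a local YAML file sets log.level to 'info', exporting TT_LOG_LEVEL before startup overrides it, because TT_* variables have the highest precedence. A sketch, with illustrative instance and file names:

$ export TT_LOG_LEVEL='debug'   # overrides log.level from the YAML file
$ tarantool --name instance001 --config config.yaml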