Configuration reference
This topic describes all configuration parameters provided by Tarantool.
Most of the configuration options described in this reference can be applied to a specific instance, replica set, group, or to all instances globally. To do so, you need to define the required option at the specified level.
Using Tarantool as an application server, you can run your own Lua applications.
In the app section, you can load the application and provide an application configuration in the app.cfg section.
Note
app can be defined in any scope.
Note
Note that an application specified using app is loaded after application roles specified using the roles option.
-
app.cfg¶ A configuration of the application loaded using app.file or app.module.

Example

In the example below, the application is loaded from the myapp.lua file placed next to the YAML configuration file:

app:
  file: 'myapp.lua'
  cfg:
    greeting: 'Hello'
Example on GitHub: application
Tip
The experimental.config.utils.schema built-in module provides an API for managing user-defined configurations of applications (app.cfg) and roles (roles_cfg).

Type: map
Default: nil
Environment variable: TT_APP_CFG
-
app.file¶ A path to a Lua file to load an application from.
Type: stringDefault: nilEnvironment variable: TT_APP_FILE
-
app.module¶ A Lua module to load an application from.
Example
The app section can be placed in any configuration scope. As an example use case, you can provide different applications for storages and routers in a sharded cluster:

groups:
  storages:
    app:
      module: storage
    # ...
  routers:
    app:
      module: router
    # ...
Type: stringDefault: nilEnvironment variable: TT_APP_MODULE
Enterprise Edition
Configuring audit_log parameters is available in the Enterprise Edition only.
The audit_log section defines configuration parameters related to audit logging.
Note
audit_log can be defined in any scope.
- audit_log.extract_key
- audit_log.file
- audit_log.filter
- audit_log.format
- audit_log.nonblock
- audit_log.pipe
- audit_log.spaces
- audit_log.to
- audit_log.syslog.*
-
audit_log.extract_key¶ If set to true, the audit subsystem extracts and prints only the primary key instead of full tuples in DML events (space_insert, space_replace, space_delete). Otherwise, full tuples are logged. The option may be useful if tuples are large.

Type: boolean
Default: false
Environment variable: TT_AUDIT_LOG_EXTRACT_KEY
-
audit_log.file¶ Specify a file for the audit log destination. You can set the file type using the audit_log.to option. If you write logs to a file, Tarantool reopens the audit log at SIGHUP.

Type: string
Default: ‘var/log/{{ instance_name }}/audit.log’
Environment variable: TT_AUDIT_LOG_FILE
-
audit_log.filter¶ Enable logging for a specified subset of audit events. This option accepts the following values:

- Event names (for example, password_change). For details, see Audit log events.
- Event groups (for example, audit). For details, see Event groups.

The option contains either one value from the Possible values section (see below) or a combination of them.

To enable custom audit log events, specify the custom value in this option.

Example

filter: [ user_create,data_operations,ddl,custom ]

Type: array
Possible values: ‘all’, ‘audit’, ‘auth’, ‘priv’, ‘ddl’, ‘dml’, ‘data_operations’, ‘compatibility’, ‘audit_enable’, ‘auth_ok’, ‘auth_fail’, ‘disconnect’, ‘user_create’, ‘user_drop’, ‘role_create’, ‘role_drop’, ‘user_disable’, ‘user_enable’, ‘user_grant_rights’, ‘role_grant_rights’, ‘role_revoke_rights’, ‘password_change’, ‘access_denied’, ‘eval’, ‘call’, ‘space_select’, ‘space_create’, ‘space_alter’, ‘space_drop’, ‘space_insert’, ‘space_replace’, ‘space_delete’, ‘custom’
Default: ‘nil’
Environment variable: TT_AUDIT_LOG_FILTER
-
audit_log.format¶ Specify a format that is used for the audit log.
Example
If you set the option to plain,

audit_log:
  to: file
  format: plain
the output in the file might look as follows:
2024-01-17T00:12:27.155+0300 4b5a2624-28e5-4b08-83c7-035a0c5a1db9 INFO remote:unix/:(socket) session_type:console module:tarantool user:admin type:space_create tag: description:Create space Bands
Type: stringPossible values: ‘json’, ‘csv’, ‘plain’Default: ‘json’Environment variable: TT_AUDIT_LOG_FORMAT
-
audit_log.nonblock¶ Specify the logging behavior if the system is not ready to write. If set to true, Tarantool does not block during logging if the system is non-writable and writes a message instead. Using this value may improve logging performance at the cost of losing some log messages.

Note

The option only has an effect if audit_log.to is set to syslog or pipe.

Type: boolean
Default: false
Environment variable: TT_AUDIT_LOG_NONBLOCK
-
audit_log.pipe¶ Specify a pipe for the audit log destination. You can set the pipe type using the audit_log.to option. If the log destination is a program, its PID is stored in the audit.pid field. You need to send it a signal to rotate logs.

Example

This starts the cronolog program when the server starts and sends all audit_log messages to cronolog's standard input (stdin):

audit_log:
  to: pipe
  pipe: 'cronolog audit_tarantool.log'
Type: stringDefault: box.NULLEnvironment variable: TT_AUDIT_LOG_PIPE
-
audit_log.spaces¶ The array of space names for which data operation events (space_select, space_insert, space_replace, space_delete) should be logged. The array accepts string values. If set to box.NULL, the data operation events are logged for all spaces.

Example

In the example, only the events of the bands and singers spaces are logged:

audit_log:
  spaces: [bands, singers]
Type: arrayDefault: box.NULLEnvironment variable: TT_AUDIT_LOG_SPACES
-
audit_log.to¶ Enable audit logging and define the log location. This option accepts the following values:
- devnull: disable audit logging.
- file: write audit logs to a file (see audit_log.file).
- pipe: start a program and write audit logs to it (see audit_log.pipe).
- syslog: write audit logs to a system logger (see audit_log.syslog.*).
By default, audit logging is disabled.
Example
The basic audit log configuration might look as follows:
audit_log:
  to: file
  file: 'audit_tarantool.log'
  filter: [ user_create,data_operations,ddl,custom ]
  format: json
  spaces: [ bands ]
  extract_key: true
Type: stringPossible values: ‘devnull’, ‘file’, ‘pipe’, ‘syslog’Default: ‘devnull’Environment variable: TT_AUDIT_LOG_TO
-
audit_log.syslog.facility¶ Specify a system logger keyword that tells syslogd where to send the message. You can enable logging to a system logger using the audit_log.to option.
See also: syslog configuration example
Type: stringPossible values: ‘auth’, ‘authpriv’, ‘cron’, ‘daemon’, ‘ftp’, ‘kern’, ‘lpr’, ‘mail’, ‘news’, ‘security’, ‘syslog’, ‘user’, ‘uucp’, ‘local0’, ‘local1’, ‘local2’, ‘local3’, ‘local4’, ‘local5’, ‘local6’, ‘local7’Default: ‘local7’Environment variable: TT_AUDIT_LOG_SYSLOG_FACILITY
-
audit_log.syslog.identity¶ Specify an application name to show in logs. You can enable logging to a system logger using the audit_log.to option.
See also: syslog configuration example
Type: stringDefault: ‘tarantool’Environment variable: TT_AUDIT_LOG_SYSLOG_IDENTITY
-
audit_log.syslog.server¶ Set a location for the syslog server. It can be a Unix socket path starting with ‘unix:’ or an ipv4 address with a port. You can enable logging to a system logger using the audit_log.to option.
Example
audit_log:
  to: syslog
  syslog:
    server: 'unix:/dev/log'
    facility: 'user'
    identity: 'tarantool_audit'

- audit_log.syslog.server – a syslog server location.
- audit_log.syslog.facility – a system logger keyword that tells syslogd where to send the message. The default value is local7.
- audit_log.syslog.identity – an application name to show in logs. The default value is tarantool.

These options are interpreted as a message for the syslogd program, which runs in the background of any Unix-like platform.
An example of a Tarantool audit log entry in the syslog:
09:32:52 tarantool_audit: {"time": "2024-02-08T09:32:52.190+0300", "uuid": "94454e46-9a0e-493a-bb9f-d59e44a43581", "severity": "INFO", "remote": "unix/:(socket)", "session_type": "console", "module": "tarantool", "user": "admin", "type": "space_create", "tag": "", "description": "Create space bands"}

Warning

The example above writes audit logs to a directory shared with the system logs. Tarantool allows this, but it is not recommended: to avoid difficulties when working with audit logs, write system and audit logs separately. To do this, create separate paths and specify them.
Type: stringDefault: box.NULLEnvironment variable: TT_AUDIT_LOG_SYSLOG_SERVER
The compat section defines values of the compat module options.
Note
compat can be defined in any scope.
- compat.binary_data_decoding
- compat.box_cfg_replication_sync_timeout
- compat.box_error_serialize_verbose
- compat.box_error_unpack_type_and_code
- compat.box_info_cluster_meaning
- compat.box_session_push_deprecation
- compat.box_space_execute_priv
- compat.box_space_max
- compat.box_tuple_extension
- compat.box_tuple_new_vararg
- compat.c_func_iproto_multireturn
- compat.fiber_channel_close_mode
- compat.fiber_slice_default
- compat.json_escape_forward_slash
- compat.sql_priv
- compat.sql_seq_scan_default
- compat.yaml_pretty_multiline
-
compat.binary_data_decoding¶ Define how to store binary data fields in Lua after decoding:
- new: as varbinary objects
- old: as plain strings
See also: Decoding binary objects
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BINARY_DATA_DECODING
-
compat.box_cfg_replication_sync_timeout¶ Set a default replication sync timeout:
- new: 0
- old: 300 seconds
Important
This value is set during the initial box.cfg{} call and cannot be changed later.

See also: Default value for replication_sync_timeout
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BOX_CFG_REPLICATION_SYNC_TIMEOUT
-
compat.box_error_serialize_verbose¶ Since: 3.1.0
Set the verbosity of error objects serialization:
- new: serialize the error message together with other potentially useful fields
- old: serialize only the error message
Type: stringPossible values: ‘new’, ‘old’Default: ‘old’Environment variable: TT_COMPAT_BOX_ERROR_SERIALIZE_VERBOSE
-
compat.box_error_unpack_type_and_code¶ Since: 3.1.0
Whether to show error fields in box.error.unpack():

- new: do not show base_type and custom_type fields; do not show the code field if it is 0. Note that base_type is still accessible for an error object.
- old: show all fields
Type: stringPossible values: ‘new’, ‘old’Default: ‘old’Environment variable: TT_COMPAT_BOX_ERROR_UNPACK_TYPE_AND_CODE
-
compat.box_info_cluster_meaning¶ Define the behavior of box.info.cluster:

- new: show the entire cluster
- old: show the current replica set
See also: Meaning of box.info.cluster
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BOX_INFO_CLUSTER_MEANING
-
compat.box_session_push_deprecation¶ Whether to raise errors on attempts to call the deprecated function box.session.push:

- new: raise an error
- old: do not raise an error
See also: box.session.push() deprecation
Type: stringPossible values: ‘new’, ‘old’Default: ‘old’Environment variable: TT_COMPAT_BOX_SESSION_PUSH_DEPRECATION
-
compat.box_space_execute_priv¶ Whether the execute privilege can be granted on spaces:

- new: an error is raised
- old: the privilege can be granted with no actual effect
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BOX_SPACE_EXECUTE_PRIV
-
compat.box_space_max¶ Set the maximum space identifier (box.schema.SPACE_MAX):

- new: 2147483646
- old: 2147483647

The limit was decremented because the old max value is used as an error indicator in the box C API.

Type: string
Possible values: ‘new’, ‘old’
Default: ‘new’
Environment variable: TT_COMPAT_BOX_SPACE_MAX
-
compat.box_tuple_extension¶ Controls the IPROTO_FEATURE_CALL_RET_TUPLE_EXTENSION and IPROTO_FEATURE_CALL_ARG_TUPLE_EXTENSION feature bits that define tuple encoding in iproto call and eval requests.

- new: tuples with formats are encoded as MP_TUPLE
- old: tuples with formats are encoded as MP_ARRAY
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BOX_TUPLE_EXTENSION
-
compat.box_tuple_new_vararg¶ Controls how box.tuple.new interprets an argument list:

- new: as a value with a tuple format
- old: as an array of tuple fields
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_BOX_TUPLE_NEW_VARARG
-
compat.c_func_iproto_multireturn¶ Controls wrapping of multiple results of a stored C function when returning them via iproto:
- new: return without wrapping (consistently with a local call via box.func)
- old: wrap results into a MessagePack array
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_C_FUNC_IPROTO_MULTIRETURN
-
compat.fiber_channel_close_mode¶ Define the behavior of fiber channels after closing:
- new: mark the channel read-only
- old: destroy the channel object
See also: Fiber channel close mode
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_FIBER_CHANNEL_CLOSE_MODE
-
compat.fiber_slice_default¶ Define the maximum fiber execution time without a yield:
- new: {warn = 0.5, err = 1.0}
- old: infinity (no warnings or errors raised).
See also: Default value for max fiber slice
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_FIBER_SLICE_DEFAULT
-
compat.json_escape_forward_slash¶ Whether to escape the forward slash symbol ‘/’ using a backslash in a json.encode() result:

- new: do not escape the forward slash
- old: escape the forward slash
See also: JSON encode escape forward slash
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_JSON_ESCAPE_FORWARD_SLASH
-
compat.sql_priv¶ Whether to enable access checks for SQL requests over iproto:
- new: check the user’s access permissions
- old: allow any user to execute SQL over iproto
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_SQL_PRIV
-
compat.sql_seq_scan_default¶ Controls the default value of the sql_seq_scan session setting:

- new: false
- old: true
See also: Default value for sql_seq_scan session setting
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_SQL_SEQ_SCAN_DEFAULT
-
compat.yaml_pretty_multiline¶ Whether to encode in block scalar style all multiline strings or only ones containing the \n\n substring:

- new: all multiline strings
- old: only strings containing the \n\n substring
See also: Lua-YAML prettier multiline output
Type: stringPossible values: ‘new’, ‘old’Default: ‘new’Environment variable: TT_COMPAT_YAML_PRETTY_MULTILINE
The conditional section defines the configuration parts that apply to instances
that meet certain conditions.
Note
conditional can be defined in the global scope only.
-
conditional.if¶ Specify a conditional section of the configuration. The configuration options defined inside a conditional.if section apply only to instances on which the specified condition is true.

Conditions can include one variable – tarantool_version: a three-number Tarantool version running on the instance, for example, 3.1.0. It compares to version literal values that include three numbers separated by periods (x.y.z).

The following operators are available in conditions:

- comparison: >, <, >=, <=, ==, !=
- logical operators || (OR) and && (AND)
- parentheses ()

Example:

In this example, different configuration parts apply to instances running Tarantool versions above and below 3.1.0:

- On versions less than 3.1.0, the upgraded label is set to false.
- On versions 3.1.0 or newer, the upgraded label is set to true. Additionally, new compat options are defined. These options were introduced in version 3.1.0, so on older versions they would cause an error.

conditional:
- if: tarantool_version < 3.1.0
  labels:
    upgraded: 'false'
- if: tarantool_version >= 3.1.0
  labels:
    upgraded: 'true'
  compat:
    box_error_serialize_verbose: 'new'
    box_error_unpack_type_and_code: 'new'

See also: Conditional configuration sections
The config section defines various parameters related to centralized configuration.
Note
config can be defined in the global scope only.
-
config.reload¶ Specify how the configuration is reloaded. This option accepts the following values:
- auto: configuration is reloaded automatically when it is changed.
- manual: configuration should be reloaded manually. In this case, you can reload the configuration in the application code using config:reload().
See also: Reloading configuration
Type: stringPossible values: ‘auto’, ‘manual’Default: ‘auto’Environment variable: TT_CONFIG_RELOAD
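For illustration, a minimal sketch of switching to manual reloading might look as follows:

config:
  reload: manual

With this setting, configuration changes take effect only after config:reload() is called in the application code.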
This section describes options related to loading configuration settings from external storage such as external files or environment variables.
-
config.context¶ Specify how to load settings from external storage. For example, this option can be used to load passwords from safe storage. You can find examples in the Loading secrets from safe storage section.
Type: mapDefault: nilEnvironment variable: TT_CONFIG_CONTEXT
-
config.context.<name>¶ The name of an entity that identifies a configuration value to load.
-
config.context.<name>.env¶ The name of an environment variable to load a configuration value from. To load a configuration value from an environment variable, set config.context.<name>.from to env.

Example

In this example, passwords are loaded from the DBADMIN_PASSWORD and SAMPLEUSER_PASSWORD environment variables:

config:
  context:
    dbadmin_password:
      from: env
      env: DBADMIN_PASSWORD
    sampleuser_password:
      from: env
      env: SAMPLEUSER_PASSWORD
See also: Loading secrets from safe storage
-
config.context.<name>.from¶ The type of storage to load a configuration value from. There are the following storage types:
- file: load a configuration value from a file. In this case, you need to specify the path to the file using config.context.<name>.file.
- env: load a configuration value from an environment variable. In this case, specify the environment variable name using config.context.<name>.env.
-
config.context.<name>.file¶ The path to a file to load a configuration value from. To load a configuration value from a file, set config.context.<name>.from to file.

Example

In this example, passwords are loaded from the dbadmin_password.txt and sampleuser_password.txt files:

config:
  context:
    dbadmin_password:
      from: file
      file: secrets/dbadmin_password.txt
      rstrip: true
    sampleuser_password:
      from: file
      file: secrets/sampleuser_password.txt
      rstrip: true
See also: Loading secrets from safe storage
-
config.context.<name>.rstrip¶ (Optional) Whether to strip whitespace characters and newlines from the end of data.
Enterprise Edition
Centralized configuration storages are supported by the Enterprise Edition only.
This section describes options related to providing connection settings to a centralized etcd-based storage.
If replication.failover is set to supervised, Tarantool also uses etcd to maintain the state of failover coordinators.
Note
Note that a centralized cluster configuration cannot contain the config.etcd section.
- config.etcd.endpoints
- config.etcd.prefix
- config.etcd.username
- config.etcd.password
- config.etcd.ssl.ca_file
- config.etcd.ssl.ca_path
- config.etcd.ssl.ssl_cert
- config.etcd.ssl.ssl_key
- config.etcd.ssl.verify_host
- config.etcd.ssl.verify_peer
- config.etcd.http.request.timeout
- config.etcd.http.request.unix_socket
- config.etcd.watchers.reconnect_max_attempts
- config.etcd.watchers.reconnect_timeout
-
config.etcd.endpoints¶ The list of endpoints used to access an etcd server.
See also: Configuring connection to an etcd storage
Type: arrayDefault: nilEnvironment variable: TT_CONFIG_ETCD_ENDPOINTS
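For illustration only, the sketch below combines several config.etcd.* options described in this section; the endpoint address, prefix, and credentials are placeholders:

config:
  etcd:
    endpoints:
    - http://localhost:2379
    prefix: /myapp
    username: sampleuser
    password: '123456'
    http:
      request:
        timeout: 3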
-
config.etcd.prefix¶ A key prefix used to search a configuration on an etcd server. Tarantool searches keys by the following path: <prefix>/config/*. Note that <prefix> should start with a slash (/).

See also: Configuring connection to an etcd storage
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_PREFIX
-
config.etcd.username¶ A username used for authentication.
See also: Configuring connection to an etcd storage
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_USERNAME
-
config.etcd.password¶ A password used for authentication.
See also: Configuring connection to an etcd storage
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_PASSWORD
-
config.etcd.ssl.ca_file¶ A path to a trusted certificate authorities (CA) file.
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_CA_FILE
-
config.etcd.ssl.ca_path¶ A path to a directory holding certificates to verify the peer with.
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_CA_PATH
-
config.etcd.ssl.ssl_cert¶ Since: 3.2.0
A path to an SSL certificate file.
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_SSL_CERT
-
config.etcd.ssl.ssl_key¶ A path to a private SSL key file.
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_SSL_KEY
-
config.etcd.ssl.verify_host¶ Enable verification of the certificate’s name (CN) against the specified host.
Type: booleanDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_VERIFY_HOST
-
config.etcd.ssl.verify_peer¶ Enable verification of the peer’s SSL certificate.
Type: booleanDefault: nilEnvironment variable: TT_CONFIG_ETCD_SSL_VERIFY_PEER
-
config.etcd.http.request.timeout¶ A time period required to process an HTTP request to an etcd server: from sending a request to receiving a response.
See also: Configuring connection to an etcd storage
Type: numberDefault: nilEnvironment variable: TT_CONFIG_ETCD_HTTP_REQUEST_TIMEOUT
-
config.etcd.http.request.unix_socket¶ A Unix domain socket used to connect to an etcd server.
Type: stringDefault: nilEnvironment variable: TT_CONFIG_ETCD_HTTP_REQUEST_UNIX_SOCKET
Enterprise Edition
Centralized configuration storages are supported by the Enterprise Edition only.
This section describes options related to providing connection settings to a centralized Tarantool-based storage.
Note
Note that a centralized cluster configuration cannot contain the config.storage section.
- config.storage.endpoints
- config.storage.prefix
- config.storage.reconnect_after
- config.storage.timeout
-
config.storage.endpoints¶ An array of endpoints used to access a configuration storage. Each endpoint can include the following fields:
- uri: a URI of the configuration storage’s instance.
- login: a username used to connect to the instance.
- password: a password used for authentication.
- params: SSL parameters required for encrypted connections (<uri>.params.*).
See also: Configuring connection to a Tarantool storage
Type: arrayDefault: nilEnvironment variable: TT_CONFIG_STORAGE_ENDPOINTS
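As an illustration, a config.storage connection section assembled from the options described in this section might look like this (the URI and credentials are placeholders):

config:
  storage:
    endpoints:
    - uri: '127.0.0.1:4401'
      login: sampleuser
      password: '123456'
    prefix: /myapp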
-
config.storage.prefix¶ A key prefix used to search a configuration in a centralized configuration storage. Tarantool searches keys by the following path: <prefix>/config/*. Note that <prefix> should start with a slash (/).

See also: Configuring connection to a Tarantool storage
Type: stringDefault: nilEnvironment variable: TT_CONFIG_STORAGE_PREFIX
-
config.storage.reconnect_after¶ A number of seconds to wait before reconnecting to a configuration storage.
Type: numberDefault: 3Environment variable: TT_CONFIG_STORAGE_RECONNECT_AFTER
-
config.storage.timeout¶ The interval (in seconds) to perform the status check of a configuration storage.
See also: Configuring connection to a Tarantool storage
Type: numberDefault: 3Environment variable: TT_CONFIG_STORAGE_TIMEOUT
Configure the administrative console. A client to the console is tt connect.
Note
console can be defined in any scope.
-
console.enabled¶ Whether to listen on the Unix socket provided in the console.socket option.
If the option is set to false, the administrative console is disabled.

Type: boolean
Default: true
Environment variable: TT_CONSOLE_ENABLED
-
console.socket¶ The Unix socket for the administrative console.
Mind the following nuances:
- Only a Unix domain socket is allowed. A TCP socket can’t be configured this way.
- console.socket is a file path, without any unix: or unix/: prefixes.
- If the file path is a relative path, it is interpreted relative to process.work_dir.
Type: stringDefault: ‘var/run/{{ instance_name }}/tarantool.control’Environment variable: TT_CONSOLE_SOCKET
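For illustration, a minimal console section that keeps the console enabled and places the socket at a custom path (the path below is arbitrary) might look as follows:

console:
  enabled: true
  socket: 'var/run/{{ instance_name }}/admin.socket'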
The credentials section allows you to create users and grant them the specified privileges.
Learn more in Credentials.
Note
credentials can be defined in any scope.
-
credentials.roles¶ An array of roles that can be granted to users or other roles.
Example
In the example below, the writers_space_reader role gets privileges to select data in the writers space:

roles:
  writers_space_reader:
    privileges:
    - permissions: [ read ]
      spaces: [ writers ]
See also: Managing users and roles
Type: mapDefault: nilEnvironment variable: TT_CREDENTIALS_ROLES
-
credentials.roles.<role_name>.privileges¶ An array of privileges granted to this role.
-
credentials.users¶ An array of users.
Example
In this example, sampleuser gets the following privileges:

- Privileges granted to the writers_space_reader role.
- Privileges to select and modify data in the books space.

sampleuser:
  password: '123456'
  roles: [ writers_space_reader ]
  privileges:
  - permissions: [ read, write ]
    spaces: [ books ]

See also: Managing users and roles

Type: map
Default: nil
Environment variable: TT_CREDENTIALS_USERS
-
credentials.users.<username>.password¶ A user’s password.
Example
In the example below, a password for the dbadmin user is set:

credentials:
  users:
    dbadmin:
      password: 'T0p_Secret_P@$$w0rd'
See also: Loading secrets from safe storage
-
credentials.users.<username>.privileges¶ An array of privileges granted to this user.
-
<user_or_role_name>.privileges¶ Privileges that can be granted to a user or role using the following options:
-
<user_or_role_name>.privileges.permissions¶ Permissions assigned to this user or a user with this role.
Example
In this example, sampleuser gets privileges to select and modify data in the books space:

sampleuser:
  password: '123456'
  roles: [ writers_space_reader ]
  privileges:
  - permissions: [ read, write ]
    spaces: [ books ]
See also: Managing users and roles
-
<user_or_role_name>.privileges.spaces¶ Spaces to which this user or a user with this role gets the specified permissions.
Example
In this example, sampleuser gets privileges to select and modify data in the books space:

sampleuser:
  password: '123456'
  roles: [ writers_space_reader ]
  privileges:
  - permissions: [ read, write ]
    spaces: [ books ]
See also: Managing users and roles
-
<user_or_role_name>.privileges.functions¶ Functions to which this user or a user with this role gets the specified permissions.
-
<user_or_role_name>.privileges.sequences¶ Sequences to which this user or a user with this role gets the specified permissions.
-
<user_or_role_name>.privileges.lua_eval¶ Whether this user or a user with this role can execute arbitrary Lua code.
-
<user_or_role_name>.privileges.lua_call¶ A list of global user-defined Lua functions that this user or a user with this role can call. To allow calling a specific function, specify its name as the value. To allow calling all global Lua functions except built-in ones, specify the all value.

This option should be configured together with the execute permission.

Since version 3.3.0, the lua_call option allows granting users privileges to call the specified Lua functions on the instance at runtime (thus, it doesn’t require the ability to write to the database).

Example of granting custom functions to the ‘alice’ user:

credentials:
  users:
    alice:
      privileges:
      - permissions: [execute]
        lua_call: [my_func, my_func2]
-
<user_or_role_name>.privileges.sql¶ Whether this user or a user with this role can execute an arbitrary SQL expression.
The database section defines database-specific configuration parameters, such as an instance’s read-write mode or transaction isolation level.
Note
database can be defined in any scope.
- database.hot_standby
- database.instance_uuid
- database.mode
- database.replicaset_uuid
- database.txn_isolation
- database.txn_timeout
- database.use_mvcc_engine
-
database.hot_standby¶ Whether to start the server in the hot standby mode. This mode can be used to provide failover without replication.
Suppose there are two cluster applications. Each cluster has one instance with the same configuration:
groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            database:
              hot_standby: true
            wal:
              dir: /tmp/wals
            snapshot:
              dir: /tmp/snapshots
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
In particular, both instances use the same directory for storing write-ahead logs and snapshots.
When you start both cluster applications on the same machine, the instance from the first one will be the primary instance and the second will be the standby instance. In the logs of the second cluster instance, you should see a notification:
main/104/interactive I> Entering hot standby mode
This means that the standby instance is ready to take over if the primary instance goes down. The standby instance initializes and tries to take a lock on the directory for storing write-ahead logs, but fails because the primary instance holds a lock on this directory.
If the primary instance goes down for any reason, the lock is released. In this case, the standby instance succeeds in taking the lock and becomes the primary instance.
database.hot_standby has no effect:

- If wal.mode is set to none.
- If wal.dir_rescan_delay is set to a large value on macOS or FreeBSD. On these platforms, the hot standby mode is designed so that the loop repeats every wal.dir_rescan_delay seconds.
- For spaces created with engine set to vinyl.

Examples on GitHub: hot_standby_1, hot_standby_2

Type: boolean
Default: false
Environment variable: TT_DATABASE_HOT_STANDBY
-
database.instance_uuid¶ An instance UUID.
By default, instance UUIDs are generated automatically. database.instance_uuid can be used to specify an instance identifier manually.

UUIDs should follow these rules:

- The values must be truly unique identifiers, not shared by other instances or replica sets within the common infrastructure.
- The values must be used consistently, not changed after the initial setup. The initial values are stored in snapshot files and are checked whenever the system is restarted.
- The values must comply with RFC 4122. The nil UUID is not allowed.
See also: database.replicaset_uuid
-
database.mode¶ An instance’s operating mode. This option is in effect if replication.failover is set to off.

The following modes are available:

- rw: an instance is in read-write mode.
- ro: an instance is in read-only mode.

If not specified explicitly, the default value depends on the number of instances in a replica set. For a single instance, the rw mode is used, while for multiple instances, the ro mode is used.

Example

You can set the database.mode option to rw on all instances in a replica set to make a master-master configuration. In this case, replication.failover should be set to off.

credentials:
  users:
    replicator:
      password: 'topsecret'
      roles: [replication]

iproto:
  advertise:
    peer:
      login: replicator

replication:
  failover: off

groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            database:
              mode: rw
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            database:
              mode: rw
            iproto:
              listen:
              - uri: '127.0.0.1:3302'

# Load sample data
app:
  file: 'myapp.lua'
Type: stringDefault: box.NULL (the actual default value depends on the number of instances in a replica set)Environment variable: TT_DATABASE_MODE
-
database.replicaset_uuid¶ A replica set UUID.

By default, replica set UUIDs are generated automatically. database.replicaset_uuid can be used to specify a replica set identifier manually.

See also: database.instance_uuid
-
database.txn_isolation¶ A transaction isolation level.
Type: stringDefault:best-effortPossible values:best-effort,read-committed,read-confirmedEnvironment variable: TT_DATABASE_TXN_ISOLATION
-
database.txn_timeout¶ A timeout (in seconds) after which the transaction is rolled back.
See also: box.begin()
Type: numberDefault: 3153600000 (~100 years)Environment variable: TT_DATABASE_TXN_TIMEOUT
-
database.use_mvcc_engine¶ Whether the transactional manager is enabled.
Type: booleanDefault: falseEnvironment variable: TT_DATABASE_USE_MVCC_ENGINE
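The sketch below, provided for illustration, enables the transactional manager together with an explicit isolation level (both options are described above):

database:
  use_mvcc_engine: true
  txn_isolation: 'read-committed'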
The failover section defines parameters related to a supervised failover.
Note
failover can be defined in the global scope only.
- failover.log.to
- failover.log.file
- failover.call_timeout
- failover.connect_timeout
- failover.lease_interval
- failover.probe_interval
- failover.renew_interval
- failover.stateboard.*
-
failover.log.to¶ Since: 3.3.0
Enterprise Edition
Configuring the failover.log.to and failover.log.file parameters is available in the Enterprise Edition only.

Define a location Tarantool sends failover logs to. This option accepts the following values:

- stderr: write logs to the standard error stream.
- file: write logs to a file (see failover.log.file).
Type: stringDefault: ‘stderr’Environment variable: TT_FAILOVER_LOG_TO
-
failover.log.file¶ Since: 3.3.0
Specify a file for the failover log destination. To write logs to a file, set failover.log.to to file. Otherwise, failover.log.file is ignored.

Example

The example below shows how to write failover logs to a file placed in the specified directory:

failover:
  log:
    to: file
    file: var/log/failover.log
Type: stringDefault: nilEnvironment variable: TT_FAILOVER_LOG_FILE
-
failover.call_timeout¶ Since: 3.1.0
A call timeout (in seconds) for monitoring and failover requests to an instance.
Type: numberDefault: 1Environment variable: TT_FAILOVER_CALL_TIMEOUT
-
failover.connect_timeout¶ Since: 3.1.0
A connection timeout (in seconds) for monitoring and failover requests to an instance.
Type: numberDefault: 1Environment variable: TT_FAILOVER_CONNECT_TIMEOUT
-
failover.lease_interval¶ Since: 3.1.0
A time interval (in seconds) that specifies how long an instance should be a leader without renew requests from a coordinator. When this interval expires, the leader switches to read-only mode. This action is performed by the instance itself and works even if there is no connectivity between the instance and the coordinator.
Type: numberDefault: 30Environment variable: TT_FAILOVER_LEASE_INTERVAL
-
failover.probe_interval¶ Since: 3.1.0
A time interval (in seconds) that specifies how often a monitoring service of the failover coordinator polls an instance for its status.
Type: numberDefault: 10Environment variable: TT_FAILOVER_PROBE_INTERVAL
-
failover.renew_interval¶ Since: 3.1.0
A time interval (in seconds) that specifies how often a failover coordinator sends read-write deadline renewals.
Type: numberDefault: 10Environment variable: TT_FAILOVER_RENEW_INTERVAL
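For illustration, the failover timeouts and intervals described above can be tuned together in a single section; the values below are arbitrary and only show the structure:

failover:
  call_timeout: 1
  connect_timeout: 1
  lease_interval: 30
  probe_interval: 10
  renew_interval: 10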
failover.stateboard.* options define configuration parameters related to maintaining the state of failover coordinators in a remote etcd-based storage.
See also: Active and passive coordinators
-
failover.stateboard.keepalive_interval¶ Since: 3.1.0
A time interval (in seconds) that specifies how long transient state information is stored and how quickly a lock expires.

Note

failover.stateboard.keepalive_interval should be smaller than failover.lease_interval. Otherwise, switching a coordinator causes a replica set leader to go to read-only mode for some time.

Type: number
Default: 10
Environment variable: TT_FAILOVER_STATEBOARD_KEEPALIVE_INTERVAL
-
failover.stateboard.renew_interval¶ Since: 3.1.0
A time interval (in seconds) that specifies how often a failover coordinator writes its state information to etcd. This option also determines the frequency at which an active coordinator reads new commands from etcd.
Type: numberDefault: 2Environment variable: TT_FAILOVER_STATEBOARD_RENEW_INTERVAL
The feedback section describes configuration parameters for sending information about a running Tarantool instance to the specified feedback server.
Note
feedback can be defined in any scope.
- feedback.crashinfo
- feedback.enabled
- feedback.host
- feedback.interval
- feedback.metrics_collect_interval
- feedback.metrics_limit
- feedback.send_metrics
-
feedback.crashinfo¶ Whether to send crash information in the case of an instance failure. This information includes:
- General information from the uname output.
- Build information.
- The crash reason.
- The stack trace.

To turn off sending crash information, set this option to false.

Type: boolean
Default: true
Environment variable: TT_FEEDBACK_CRASHINFO
-
feedback.enabled¶ Whether to send information about a running instance to the feedback server. To turn off sending feedback, set this option to false.

Type: boolean
Default: true
Environment variable: TT_FEEDBACK_ENABLED
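For example, a minimal sketch that turns off sending feedback entirely:

feedback:
  enabled: false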
-
feedback.host¶ The address to which information is sent.
-
feedback.interval¶ The interval (in seconds) of sending information.
Type: numberDefault: 3600Environment variable: TT_FEEDBACK_INTERVAL
-
feedback.metrics_collect_interval¶ The interval (in seconds) for collecting metrics.
Type: numberDefault: 60Environment variable: TT_FEEDBACK_METRICS_COLLECT_INTERVAL
-
feedback.metrics_limit¶ The maximum size of memory (in bytes) used to store metrics before sending them to the feedback server. If the size of collected metrics exceeds this value, earlier metrics are dropped.
Type: integerDefault: 1024 * 1024 (1048576)Environment variable: TT_FEEDBACK_METRICS_LIMIT
The fiber section describes options related to configuring fibers, yields, and cooperative multitasking.
Note
fiber can be defined in any scope.
- fiber.io_collect_interval
- fiber.too_long_threshold
- fiber.worker_pool_threads
- fiber.slice.*
- fiber.top.*
-
fiber.io_collect_interval¶ The time period (in seconds) a fiber sleeps between iterations of the event loop.
fiber.io_collect_interval can be used to reduce CPU load in deployments where the number of client connections is large, but requests are not so frequent (for example, each connection issues just a handful of requests per second).

Type: number
Default: box.NULL
Environment variable: TT_FIBER_IO_COLLECT_INTERVAL
-
fiber.too_long_threshold¶ If processing a request takes longer than the given period (in seconds), the fiber warns about it in the log.
fiber.too_long_threshold has effect only if log.level is greater than or equal to 4 (warn).

Type: number
Default: 0.5
Environment variable: TT_FIBER_TOO_LONG_THRESHOLD
-
fiber.worker_pool_threads¶ The maximum number of threads to use during execution of certain internal processes (for example, socket.getaddrinfo() and coio_call()).
Type: numberDefault: 4Environment variable: TT_FIBER_WORKER_POOL_THREADS
This section describes options related to configuring time periods for fiber slices. See fiber.set_max_slice for details and examples.
-
fiber.slice.warn¶ Set a time period (in seconds) that specifies the warning slice.
Type: numberDefault: 0.5Environment variable: TT_FIBER_SLICE_WARN
-
fiber.slice.err¶ Set a time period (in seconds) that specifies the error slice.
Type: numberDefault: 1Environment variable: TT_FIBER_SLICE_ERR
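Put together, a fiber.slice section that raises both slices might look as follows (the values are arbitrary):

fiber:
  slice:
    warn: 1.0
    err: 3.0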
This section describes options related to configuring the
fiber.top() function, normally used for debug purposes.
fiber.top() shows all alive fibers and their CPU consumption.
-
fiber.top.enabled¶ Enable or disable the fiber.top() function.

Enabling fiber.top() slows down fiber switching by about 15%, so it is disabled by default.

Type: boolean
Default: false
Environment variable: TT_FIBER_TOP_ENABLED
Enterprise Edition
Configuring flightrec parameters is available in the Enterprise Edition only.
The flightrec section describes options related to the flight recorder configuration.
Note
flightrec can be defined in any scope.
- flightrec.enabled
- flightrec.logs_size
- flightrec.logs_max_msg_size
- flightrec.logs_log_level
- flightrec.metrics_period
- flightrec.metrics_interval
- flightrec.requests_size
- flightrec.requests_max_req_size
- flightrec.requests_max_res_size
-
flightrec.enabled¶ Enable the flight recorder.
Type: booleanDefault: falseEnvironment variable: TT_FLIGHTREC_ENABLED
-
flightrec.logs_size¶ Specify the size (in bytes) of the log storage. You can set this option to 0 to disable the log storage.

Type: integer
Default: 10485760
Environment variable: TT_FLIGHTREC_LOGS_SIZE
-
flightrec.logs_max_msg_size¶ Specify the maximum size (in bytes) of the log message. The log message is truncated if its size exceeds this limit.
Type: integerDefault: 4096Maximum: 16384Environment variable: TT_FLIGHTREC_LOGS_MAX_MSG_SIZE
-
flightrec.logs_log_level¶ Specify the level of detail the log has. The default value is 6 (VERBOSE). You can learn more about log levels from the log_level option description. Note that the flightrec.logs_log_level value might differ from log_level.

Type: integer
Default: 6
Environment variable: TT_FLIGHTREC_LOGS_LOG_LEVEL
-
flightrec.metrics_period¶ Specify the time period (in seconds) that defines how long metrics are stored from the moment of dump. So, this value defines how much historical metrics data is collected up to the moment of crash. The frequency of metric dumps is defined by flightrec.metrics_interval.
Type: integerDefault: 180Environment variable: TT_FLIGHTREC_METRICS_PERIOD
-
flightrec.metrics_interval¶ Specify the time interval (in seconds) that defines the frequency of dumping metrics. This value shouldn’t exceed flightrec.metrics_period.
Type: number
Default: 1.0
Minimum: 0.001
Environment variable: TT_FLIGHTREC_METRICS_INTERVAL

Note
Given that the average size of a metrics entry is 2 kB, you can estimate the size of the metrics storage as follows:
(flightrec_metrics_period / flightrec_metrics_interval) * 2 kB
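For instance, with the default flightrec.metrics_period of 180 seconds and flightrec.metrics_interval of 1.0 second, this estimate gives (180 / 1.0) * 2 kB = 360 kB of metrics storage.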
-
flightrec.requests_size¶ Specify the size (in bytes) of storage for the request and response data. You can set this parameter to 0 to disable the storage of requests and responses.

Type: integer
Default: 10485760
Environment variable: TT_FLIGHTREC_REQUESTS_SIZE
-
flightrec.requests_max_req_size¶ Specify the maximum size (in bytes) of a request entry. A request entry is truncated if this size is exceeded.
Type: integerDefault: 16384Environment variable: TT_FLIGHTREC_REQUESTS_MAX_REQ_SIZE
-
flightrec.requests_max_res_size¶ Specify the maximum size (in bytes) of a response entry. A response entry is truncated if this size is exceeded.
Type: integerDefault: 16384Environment variable: TT_FLIGHTREC_REQUESTS_MAX_RES_SIZE
The iproto section is used to configure parameters related to communicating to and between cluster instances.
Note
iproto can be defined in any scope.
-
iproto.listen¶ An array of URIs used to listen for incoming requests. If required, you can enable SSL for specific URIs by providing additional parameters (<uri>.params.*).
Note that a URI value can’t contain parameters, a login, or a password.
Example
In the example below, iproto.listen is set explicitly for each instance in a cluster:

groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            iproto:
              listen:
              - uri: '127.0.0.1:3302'
          instance003:
            iproto:
              listen:
              - uri: '127.0.0.1:3303'
See also: Connections
Type: arrayDefault: nilEnvironment variable: TT_IPROTO_LISTEN
-
iproto.net_msg_max¶ To handle messages, Tarantool allocates fibers. To prevent fiber overhead from affecting the whole system, Tarantool restricts how many messages the fibers handle, so that some pending requests are blocked.
- On powerful systems, increase net_msg_max, and the scheduler starts processing pending requests immediately.
- On weaker systems, decrease net_msg_max, and the overhead may decrease. However, this may take some time because the scheduler must wait until already-running requests finish.

When net_msg_max is reached, Tarantool suspends processing of incoming packages until it has processed earlier messages. This is not a direct restriction of the number of fibers that handle network messages, rather it is a system-wide restriction of channel bandwidth. This in turn restricts the number of incoming network messages that the transaction processor thread handles, and therefore indirectly affects the fibers that handle network messages.

Note

The number of fibers is smaller than the number of messages because messages can be released as soon as they are delivered, while incoming requests might not be processed until some time after delivery.

Type: integer
Default: 768
Environment variable: TT_IPROTO_NET_MSG_MAX
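For illustration, a sketch of raising the limit on a powerful system (the value is arbitrary):

iproto:
  net_msg_max: 1536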
-
iproto.readahead¶ The size of the read-ahead buffer associated with a client connection. The larger the buffer, the more memory an active connection consumes, and the more requests can be read from the operating system buffer in a single system call.
The recommendation is to make sure that the buffer can contain at least a few dozen requests. Therefore, if a typical tuple in a request is large, e.g. a few kilobytes or even megabytes, the read-ahead buffer size should be increased. If batched request processing is not used, it’s prudent to leave this setting at its default.
Type: integerDefault: 16320Environment variable: TT_IPROTO_READAHEAD
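For example, if a typical request carries multi-kilobyte tuples, you might enlarge the buffer; the value below is only an illustration:

iproto:
  readahead: 65536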
-
iproto.threads¶ The number of network threads. There can be unusual workloads where the network thread is 100% loaded and the transaction processor thread is not, so the network thread is a bottleneck. In that case, set iproto.threads to 2 or more. The operating system kernel determines which connection goes to which thread.

Type: integer
Default: 1
Environment variable: TT_IPROTO_THREADS
- iproto.advertise.client
- iproto.advertise.peer
- iproto.advertise.sharding
- iproto.advertise.<peer_or_sharding>.*
-
iproto.advertise.client¶ A URI used to advertise the current instance to clients.
The iproto.advertise.client option accepts a URI in the following formats:

- An address: host:port.
- A Unix domain socket: unix/:.

Note that this option doesn’t allow you to set a username and password. If a remote client needs this information, it should be delivered outside of the cluster configuration.

Note

The host value cannot be 0.0.0.0/[::] and the port value cannot be 0.
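A minimal sketch, with a placeholder address, of advertising an instance to clients:

iproto:
  advertise:
    client: '192.168.10.10:3301'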
-
iproto.advertise.peer¶ Settings used to advertise the current instance to other cluster members. The format of these settings is described in iproto.advertise.<peer_or_sharding>.*.
Example
In the example below, the following configuration options are specified:
- In the credentials section, the replicator user with the replication role is created.
- iproto.advertise.peer specifies that other instances should connect to an address defined in iproto.listen using the replicator user.

credentials:
  users:
    replicator:
      password: 'topsecret'
      roles: [replication]

iproto:
  advertise:
    peer:
      login: replicator

replication:
  failover: election

groups:
  group001:
    replicasets:
      replicaset001:
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            iproto:
              listen:
              - uri: '127.0.0.1:3302'
          instance003:
            iproto:
              listen:
              - uri: '127.0.0.1:3303'
-
iproto.advertise.sharding¶ Settings used to advertise the current instance to a router and rebalancer. The format of these settings is described in iproto.advertise.<peer_or_sharding>.*.
Note
If iproto.advertise.sharding is not specified, advertise settings from iproto.advertise.peer are used.

Example

In the example below, the following configuration options are specified:

- In the credentials section, the replicator and storage users are created.
- iproto.advertise.peer specifies that other instances should connect to an address defined in iproto.listen with the replicator user.
- iproto.advertise.sharding specifies that a router should connect to storages using an address defined in iproto.listen with the storage user.

credentials:
  users:
    replicator:
      password: 'topsecret'
      roles: [replication]
    storage:
      password: 'secret'
      roles: [sharding]

iproto:
  advertise:
    peer:
      login: replicator
    sharding:
      login: storage
- iproto.advertise.<peer_or_sharding>.uri
- iproto.advertise.<peer_or_sharding>.login
- iproto.advertise.<peer_or_sharding>.password
- iproto.advertise.<peer_or_sharding>.params
-
iproto.advertise.<peer_or_sharding>.uri¶ (Optional) A URI used to advertise the current instance. By default, the URI defined in iproto.listen is used to advertise the current instance.
Note
The host value cannot be 0.0.0.0/[::] and the port value cannot be 0.

Type: string
Default: nil
Environment variable: TT_IPROTO_ADVERTISE_PEER_URI, TT_IPROTO_ADVERTISE_SHARDING_URI
-
iproto.advertise.<peer_or_sharding>.login¶ (Optional) A username used to connect to the current instance. If a username is not set, the guest user is used.

Type: string
Default: nil
Environment variable: TT_IPROTO_ADVERTISE_PEER_LOGIN, TT_IPROTO_ADVERTISE_SHARDING_LOGIN
-
iproto.advertise.<peer_or_sharding>.password¶ (Optional) A password for the specified user. If a login is specified but a password is missing, it is taken from the user’s credentials.

Type: string
Default: nil
Environment variable: TT_IPROTO_ADVERTISE_PEER_PASSWORD, TT_IPROTO_ADVERTISE_SHARDING_PASSWORD
-
iproto.advertise.<peer_or_sharding>.params¶ (Optional) URI parameters (<uri>.params.*) required for connecting to the current instance.
Enterprise Edition
TLS traffic encryption is supported by the Enterprise Edition only.
URI parameters that can be used in the iproto.listen.<uri>.params and iproto.advertise.<peer_or_sharding>.params options.
- <uri>.params.transport
- <uri>.params.ssl_ca_file
- <uri>.params.ssl_cert_file
- <uri>.params.ssl_ciphers
- <uri>.params.ssl_key_file
- <uri>.params.ssl_password
- <uri>.params.ssl_password_file
Note
Note that <uri>.params.* options don’t have corresponding environment variables for URIs specified in iproto.listen.
-
<uri>.params.transport¶ Allows you to enable traffic encryption for client-server communications over binary connections. In a Tarantool cluster, one instance might act as the server that accepts connections from other instances and the client that connects to other instances.
<uri>.params.transport accepts one of the following values:

- plain (default): turn off traffic encryption.
- ssl: encrypt traffic by using the TLS 1.2 protocol (Enterprise Edition only).
Example
The example below demonstrates how to enable traffic encryption by using a self-signed server certificate. The following parameters are specified for each instance:
- ssl_cert_file: a path to an SSL certificate file.
- ssl_key_file: a path to a private SSL key file.

replicaset001:
  replication:
    failover: manual
    leader: instance001
  iproto:
    advertise:
      peer:
        login: replicator
  instances:
    instance001:
      iproto:
        listen:
        - uri: '127.0.0.1:3301'
          params:
            transport: 'ssl'
            ssl_cert_file: 'certs/server.crt'
            ssl_key_file: 'certs/server.key'
    instance002:
      iproto:
        listen:
        - uri: '127.0.0.1:3302'
          params:
            transport: 'ssl'
            ssl_cert_file: 'certs/server.crt'
            ssl_key_file: 'certs/server.key'
    instance003:
      iproto:
        listen:
        - uri: '127.0.0.1:3303'
          params:
            transport: 'ssl'
            ssl_cert_file: 'certs/server.crt'
            ssl_key_file: 'certs/server.key'
Example on Github: ssl_without_ca
Type: stringDefault: ‘plain’Environment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_TRANSPORT, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_TRANSPORT
-
<uri>.params.ssl_ca_file¶ (Optional) A path to a trusted certificate authorities (CA) file. If not set, the peer won’t be checked for authenticity.
Both a server and a client can use the ssl_ca_file parameter:

- If it’s on the server side, the server verifies the client.
- If it’s on the client side, the client verifies the server.
- If both sides have the CA files, the server and the client verify each other.
See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_CA_FILE, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_CA_FILE
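For illustration, the listen URI below extends the transport example with a CA file so that the peer is verified; all paths are placeholders:

iproto:
  listen:
  - uri: '127.0.0.1:3301'
    params:
      transport: 'ssl'
      ssl_ca_file: 'certs/ca.crt'
      ssl_cert_file: 'certs/server.crt'
      ssl_key_file: 'certs/server.key'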
-
<uri>.params.ssl_cert_file¶ A path to an SSL certificate file:
- For a server, it’s mandatory.
- For a client, it’s mandatory if the ssl_ca_file parameter is set for a server; otherwise, optional.
See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_CERT_FILE, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_CERT_FILE
-
<uri>.params.ssl_ciphers¶ (Optional) A colon-separated (:) list of SSL cipher suites the connection can use. Note that the list is not validated: if a cipher suite is unknown, Tarantool ignores it, doesn’t establish the connection, and writes to the log that no shared cipher was found.

The supported cipher suites are:
- ECDHE-ECDSA-AES256-GCM-SHA384
- ECDHE-RSA-AES256-GCM-SHA384
- DHE-RSA-AES256-GCM-SHA384
- ECDHE-ECDSA-CHACHA20-POLY1305
- ECDHE-RSA-CHACHA20-POLY1305
- DHE-RSA-CHACHA20-POLY1305
- ECDHE-ECDSA-AES128-GCM-SHA256
- ECDHE-RSA-AES128-GCM-SHA256
- DHE-RSA-AES128-GCM-SHA256
- ECDHE-ECDSA-AES256-SHA384
- ECDHE-RSA-AES256-SHA384
- DHE-RSA-AES256-SHA256
- ECDHE-ECDSA-AES128-SHA256
- ECDHE-RSA-AES128-SHA256
- DHE-RSA-AES128-SHA256
- ECDHE-ECDSA-AES256-SHA
- ECDHE-RSA-AES256-SHA
- DHE-RSA-AES256-SHA
- ECDHE-ECDSA-AES128-SHA
- ECDHE-RSA-AES128-SHA
- DHE-RSA-AES128-SHA
- AES256-GCM-SHA384
- AES128-GCM-SHA256
- AES256-SHA256
- AES128-SHA256
- AES256-SHA
- AES128-SHA
- GOST2012-GOST8912-GOST8912
- GOST2001-GOST89-GOST89
For detailed information on SSL ciphers and their syntax, refer to OpenSSL documentation.
See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_CIPHERS, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_CIPHERS
-
<uri>.params.ssl_key_file¶ A path to a private SSL key file:
- For a server, it’s mandatory.
- For a client, it’s mandatory if the ssl_ca_file parameter is set for a server; otherwise, optional.
If the private key is encrypted, provide a password for it in the ssl_password or ssl_password_file parameter.

See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_KEY_FILE, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_KEY_FILE
-
<uri>.params.ssl_password¶ (Optional) A password for an encrypted private SSL key provided using ssl_key_file. Alternatively, the password can be provided in ssl_password_file.

Tarantool applies the ssl_password and ssl_password_file parameters in the following order:

- If ssl_password is provided, Tarantool tries to decrypt the private key with it.
- If ssl_password is incorrect or isn’t provided, Tarantool tries all passwords from ssl_password_file one by one in the order they are written.
- If ssl_password and all passwords from ssl_password_file are incorrect, or none of them is provided, Tarantool treats the private key as unencrypted.
See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_PASSWORD, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_PASSWORD- If
-
<uri>.params.ssl_password_file¶ (Optional) A text file with one or more passwords for encrypted private SSL keys provided using ssl_key_file (each on a separate line). Alternatively, the password can be provided in ssl_password.

See also: <uri>.params.transport
Type: stringDefault: nilEnvironment variable: TT_IPROTO_ADVERTISE_PEER_PARAMS_SSL_PASSWORD_FILE, TT_IPROTO_ADVERTISE_SHARDING_PARAMS_SSL_PASSWORD_FILE
The groups section provides the ability to define the full topology of a Tarantool cluster.
Note
groups can be defined in the global scope only.
-
groups.<group_name>¶ A group name.
The following rules are applied to group names:
- The maximum number of symbols is 63.
- Should start with a letter.
- Can contain lowercase letters (a-z).
- Can contain digits (0-9).
- Can contain the following characters: -, _.
-
groups.<group_name>.replicasets¶ Replica sets that belong to this group. See replicasets.
-
groups.<group_name>.<config_parameter>¶ Any configuration parameter that can be defined in the group scope. For example, iproto and database configuration parameters defined at the group level are applied to all instances in this group.
Note
replicasets can be defined in the group scope only.
- replicasets.<replicaset_name>
- replicasets.<replicaset_name>.leader
- replicasets.<replicaset_name>.bootstrap_leader
- replicasets.<replicaset_name>.instances
- replicasets.<replicaset_name>.<config_parameter>
-
replicasets.<replicaset_name>¶ A replica set name.
Note that the rules applied to a replica set name are the same as for groups. Learn more in groups.<group_name>.
-
replicasets.<replicaset_name>.leader¶ A replica set leader. This option can be used to set a replica set leader when manual replication.failover is used.

To perform controlled failover, <replicaset_name>.leader can be temporarily removed or set to null.

Example

replication:
  failover: manual

groups:
  group001:
    replicasets:
      replicaset001:
        leader: instance001
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            iproto:
              listen:
              - uri: '127.0.0.1:3302'
          instance003:
            iproto:
              listen:
              - uri: '127.0.0.1:3303'
-
replicasets.<replicaset_name>.bootstrap_leader¶ A bootstrap leader for a replica set. To specify a bootstrap leader manually, you need to set replication.bootstrap_strategy to config.

Example

groups:
  group001:
    replicasets:
      replicaset001:
        replication:
          bootstrap_strategy: config
        bootstrap_leader: instance001
        instances:
          instance001:
            iproto:
              listen:
              - uri: '127.0.0.1:3301'
          instance002:
            iproto:
              listen:
              - uri: '127.0.0.1:3302'
          instance003:
            iproto:
              listen:
              - uri: '127.0.0.1:3303'
-
replicasets.<replicaset_name>.<config_parameter>¶ Any configuration parameter that can be defined in the replica set scope. For example, iproto and database configuration parameters defined at the replica set level are applied to all instances in this replica set.
Note
instances can be defined in the replica set scope only.
-
instances.<instance_name>¶ An instance name.
Note that the rules applied to an instance name are the same as for groups. Learn more in groups.<group_name>.
Since version 3.3.0, the isolated option can be set in the instance configuration.
The option takes boolean values and is set to false by default.
Setting isolated: true moves the instance it is applied to into the isolated mode.
The isolated mode allows you to temporarily isolate an instance and perform maintenance activities on it.
In the isolated mode:
- The instance is moved to the read-only state
- iproto stops listening for new connections
- iproto drops all the current connections
- The instance is disconnected from all the replication upstreams
- Other replicaset members exclude the isolated instance from the replication upstreams
Note
An isolated instance can’t be bootstrapped (a local snapshot is required to start).
Example
The example below shows how to isolate an instance:
groups: g: replicasets: r: instances: i-001: {} i-002: {} i-003: {} i-004: isolated: true
The labels section allows adding custom attributes to the configuration.
Attributes must be key: value pairs with string keys and values.
Note
labels can be defined in any scope.
-
labels.<label_name>¶ A value of the label with the specified name.
Example
The example below shows how to define labels on the replica set and instance levels:
groups: group001: replicasets: replicaset001: labels: dc: 'east' production: 'false' instances: instance001: labels: rack: '10' production: 'true'
See also: Adding labels
The log section defines configuration parameters related to logging.
To handle logging in your application, use the log module.
Note
log can be defined in any scope.
-
log.to¶ Define a location Tarantool sends logs to. This option accepts the following values:
stderr: write logs to the standard error stream.file: write logs to a file (see log.file).pipe: start a program and write logs to its standard input (see log.pipe).syslog: write logs to a system logger (see log.syslog.*).
Type: stringDefault: ‘stderr’Environment variable: TT_LOG_TO
-
log.file¶ Specify a file for the log destination. To write logs to a file, you need to set log.to to
file. Otherwise,log.fileis ignored.Example
The example below shows how to write logs to a file placed in the specified directory:
log: to: file file: var/log/{{ instance_name }}/instance.log
Example on GitHub: log_file
Type: stringDefault: ‘var/log/{{ instance_name }}/tarantool.log’Environment variable: TT_LOG_FILE
-
log.format¶ Specify a format that is used for a log entry. The following formats are supported:
plain: a log entry is formatted as plain text. Example:2024-04-09 11:00:10.369 [12089] main/104/interactive I> log level 5 (INFO)
json: a log entry is formatted as JSON and includes additional fields. Example:{ "time": "2024-04-09T11:00:57.174+0300", "level": "INFO", "message": "log level 5 (INFO)", "pid": 12160, "cord_name": "main", "fiber_id": 104, "fiber_name": "interactive", "file": "src/main.cc", "line": 498 }
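Example
A minimal sketch that switches log entries to the JSON format:
log:
  format: json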
Type: stringDefault: ‘plain’Environment variable: TT_LOG_FORMAT
-
log.level¶ Specify the level of detail logs have. There are the following levels:
- 0 –
fatal - 1 –
syserror - 2 –
error - 3 –
crit - 4 –
warn - 5 –
info - 6 –
verbose - 7 –
debug
By setting
log.level, you can enable logging of all events with severities above or equal to the given level.Example
The example below shows how to log all events with severities above or equal to the
VERBOSElevel.log: level: 'verbose'
Example on GitHub: log_level
Type: number, stringDefault: 5Environment variable: TT_LOG_LEVEL- 0 –
-
log.modules¶ Configure the specified log levels (log.level) for different modules.
You can specify a logging level for the following module types:
- Modules (files) that use the default logger. Example: Set log levels for files that use the default logger.
- Modules that use custom loggers created using the log.new() function. Example: Set log levels for modules that use custom loggers.
- The
tarantoolmodule that enables you to configure the logging level for Tarantool core messages. Specifically, it configures the logging level for messages logged from non-Lua code, including C modules. Example: Set a log level for C modules.
Example 1: Set log levels for files that use the default logger
Suppose you have two identical modules placed by the following paths:
test/module1.luaandtest/module2.lua. These modules use the default logger and look as follows:return { say_hello = function() local log = require('log') log.info('Info message from module1') end }
To configure logging levels, you need to provide module names corresponding to paths to these modules:
log: modules: test.module1: 'verbose' test.module2: 'error' app: file: 'app.lua'
To load these modules in your application (
app.lua), you need to add the correspondingrequiredirectives:module1 = require('test.module1') module2 = require('test.module2')
Given that
module1has theverboselogging level andmodule2has theerrorlevel, callingmodule1.say_hello()shows a message butmodule2.say_hello()is swallowed:-- Prints 'info' messages -- module1.say_hello() --[[ [92617] main/103/interactive/test.logging.module1 I> Info message from module1 --- ... --]] -- Swallows 'info' messages -- module2.say_hello() --[[ --- ... --]]
Example on GitHub: log_existing_modules
Example 2: Set log levels for modules that use custom loggers
This example shows how to set the
verboselevel formodule1and theerrorlevel formodule2:log: modules: module1: 'verbose' module2: 'error' app: file: 'app.lua'
To create custom loggers in your application (
app.lua), call the log.new() function:-- Creates new loggers -- module1_log = require('log').new('module1') module2_log = require('log').new('module2')
Given that
module1has theverboselogging level andmodule2has theerrorlevel, callingmodule1_log.info()shows a message butmodule2_log.info()is swallowed:-- Prints 'info' messages -- module1_log.info('Info message from module1') --[[ [16300] main/103/interactive/module1 I> Info message from module1 --- ... --]] -- Swallows 'debug' messages -- module1_log.debug('Debug message from module1') --[[ --- ... --]] -- Swallows 'info' messages -- module2_log.info('Info message from module2') --[[ --- ... --]]
Example on GitHub: log_new_modules
Example 3: Set a log level for C modules
This example shows how to set the
infolevel for thetarantoolmodule:log: modules: tarantool: 'info' app: file: 'app.lua'
The specified level affects messages logged from C modules:
ffi = require('ffi') -- Prints 'info' messages -- ffi.C._say(ffi.C.S_INFO, nil, 0, nil, 'Info message from C module') --[[ [6024] main/103/interactive I> Info message from C module --- ... --]] -- Swallows 'debug' messages -- ffi.C._say(ffi.C.S_DEBUG, nil, 0, nil, 'Debug message from C module') --[[ --- ... --]]
The example above uses the LuaJIT ffi library to call C functions provided by the
saymodule.Example on GitHub: log_existing_c_modules
-
log.nonblock¶ Specify the logging behavior if the system is not ready to write. If set to
true, Tarantool does not block during logging if the system is non-writable and drops the message instead. Using this value may improve logging performance at the cost of losing some log messages.Note
The option only has an effect if log.to is set to
syslogorpipe.Type: booleanDefault: falseEnvironment variable: TT_LOG_NONBLOCK
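Example
A minimal sketch that enables non-blocking logging for the syslog destination:
log:
  to: syslog
  nonblock: true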
-
log.pipe¶ Start a program and write logs to its standard input (
stdin). To send logs to a program’s standard input, you need to set log.to topipe.Example
In the example below, Tarantool writes logs to the standard input of
cronolog:log: to: pipe pipe: 'cronolog tarantool.log'
Example on GitHub: log_pipe
-
log.syslog.facility¶ Specify the syslog facility to be used when syslog is enabled. To write logs to syslog, you need to set log.to to
syslog.Type: stringPossible values: ‘auth’, ‘authpriv’, ‘cron’, ‘daemon’, ‘ftp’, ‘kern’, ‘lpr’, ‘mail’, ‘news’, ‘security’, ‘syslog’, ‘user’, ‘uucp’, ‘local0’, ‘local1’, ‘local2’, ‘local3’, ‘local4’, ‘local5’, ‘local6’, ‘local7’Default: ‘local7’Environment variable: TT_LOG_SYSLOG_FACILITY
-
log.syslog.identity¶ Specify an application name used to identify Tarantool messages in syslog logs. To write logs to syslog, you need to set log.to to
syslog.Type: stringDefault: ‘tarantool’Environment variable: TT_LOG_SYSLOG_IDENTITY
-
log.syslog.server¶ Set a location of a syslog server. This option accepts one of the following values:
- An IPv4 address. Example:
127.0.0.1:514. - A Unix socket path starting with
unix:. Examples:unix:/dev/logon Linux orunix:/var/run/syslogon macOS.
To write logs to syslog, you need to set log.to to
syslog.Example
In the example below, Tarantool writes logs to a syslog server that listens for logging messages on the
127.0.0.1:514address:log: to: syslog syslog: server: '127.0.0.1:514'
Example on GitHub: log_syslog
Type: stringDefault: box.NULLEnvironment variable: TT_LOG_SYSLOG_SERVER- An IPv4 address. Example:
The lua section outlines the configuration parameters related to the Lua environment within Tarantool.
Note
lua can be defined in any scope.
-
lua.memory¶ Specifies the maximum memory amount available to Lua scripts, measured in bytes.
When the specified value exceeds the current memory usage, the new limit takes effect immediately without a restart. However, when the specified value is lower than the current memory usage, a restart of the instance is required for the change to take effect.
Example to set the Lua memory limit to 4 GB:
lua: memory: 4294967296
Type: integerDefault: 2147483648 (2GB)Environment variable: TT_LUA_MEMORY
The memtx section is used to configure parameters related to the memtx engine.
Note
memtx can be defined in any scope.
- memtx.allocator
- memtx.max_tuple_size
- memtx.memory
- memtx.min_tuple_size
- memtx.slab_alloc_factor
- memtx.slab_alloc_granularity
- memtx.sort_threads
-
memtx.allocator¶ Specify the allocator that manages memory for
memtxtuples. Possible values:system– the memory is allocated as needed, checking that the quota is not exceeded. The allocator is based on themallocfunction.small– a slab allocator. The allocator repeatedly uses a memory block to allocate objects of the same type. Note that this allocator is prone to unresolvable fragmentation on specific workloads, so you can switch tosystemin such cases.
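Example
A minimal sketch that switches tuple allocation to the system allocator, for example, to work around fragmentation issues with the small allocator:
memtx:
  allocator: 'system'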
Type: stringDefault: ‘small’Environment variable: TT_MEMTX_ALLOCATOR
-
memtx.max_tuple_size¶ Size of the largest allocation unit for the memtx storage engine in bytes. It can be increased if it is necessary to store large tuples.
Type: integerDefault: 1048576Environment variable: TT_MEMTX_MAX_TUPLE_SIZE
-
memtx.memory¶ The amount of memory in bytes that Tarantool allocates to store tuples. When the limit is reached, INSERT and UPDATE requests fail with the
ER_MEMORY_ISSUEerror. The server does not go beyond thememtx.memorylimit to allocate tuples, but there is additional memory used to store indexes and connection information.Example
In the example below, the memory size is set to 1 GB (1073741824 bytes).
memtx: memory: 1073741824
Type: integerDefault: 268435456Environment variable: TT_MEMTX_MEMORY
-
memtx.min_tuple_size¶ Size of the smallest allocation unit in bytes. It can be decreased if most of the tuples are very small.
Type: integerDefault: 16Possible values: between 8 and 1048280 inclusiveEnvironment variable: TT_MEMTX_MIN_TUPLE_SIZE
-
memtx.slab_alloc_factor¶ The multiplier for computing the sizes of memory chunks that tuples are stored in. A lower value may result in less wasted memory depending on the total amount of memory available and the distribution of item sizes.
See also: memtx.slab_alloc_granularity
Type: numberDefault: 1.05Possible values: between 1 and 2 inclusiveEnvironment variable: TT_MEMTX_SLAB_ALLOC_FACTOR
-
memtx.slab_alloc_granularity¶ Specify the granularity in bytes of memory allocation in the small allocator. The
memtx.slab_alloc_granularityvalue should meet the following conditions:- The value is a power of two.
- The value is greater than or equal to 4.
Below are a few recommendations on how to adjust the
memtx.slab_alloc_granularityoption:- If the tuples in space are small and have about the same size, set the option to 4 bytes to save memory.
- If the tuples are different-sized, increase the option value to allocate tuples from the same
mempool(memory pool).
See also: memtx.slab_alloc_factor
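Example
A minimal sketch that lowers the allocation granularity to 4 bytes for spaces with small, similarly sized tuples:
memtx:
  slab_alloc_granularity: 4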
Type: integerDefault: 8Environment variable: TT_MEMTX_SLAB_ALLOC_GRANULARITY
-
memtx.sort_threads¶ The number of threads from the thread pool used to sort keys of secondary indexes on loading a
memtxdatabase. The minimum value is 1, the maximum value is 256. The default is to use all available cores.Note
Since 3.0.0, this option replaces the approach when OpenMP threads are used to parallelize sorting. For backward compatibility, the
OMP_NUM_THREADSenvironment variable is taken into account to set the number of sorting threads.Type: integerDefault: box.NULLEnvironment variable: TT_MEMTX_SORT_THREADS
The metrics section defines configuration parameters for metrics.
Note
metrics can be defined in any scope.
-
metrics.exclude¶ An array containing the metrics to turn off. The array can contain the same values as the
excludeconfiguration parameter passed to metrics.cfg().Example
metrics: include: [ all ] exclude: [ vinyl ] labels: alias: '{{ instance_name }}'
Type: arrayDefault:[]Environment variable: TT_METRICS_EXCLUDE
-
metrics.include¶ An array containing the metrics to turn on. The array can contain the same values as the
includeconfiguration parameter passed to metrics.cfg().Type: arrayDefault:[ all ]Environment variable: TT_METRICS_INCLUDE
The process section defines configuration parameters of the Tarantool process in the system.
Note
process can be defined in any scope.
- process.background
- process.coredump
- process.title
- process.pid_file
- process.strip_core
- process.username
- process.work_dir
-
process.background¶ Run the server as a daemon process.
If this option is set to
true, the Tarantool log location defined by the log.to option should be set tofile,pipe, orsyslog– anything other thanstderr, the default, because a daemon process is detached from a terminal and can’t write to the terminal’s stderr.Important
Do not enable the background mode for applications intended to run by the
ttutility. For more information, see the tt start reference.Type: booleanDefault: falseEnvironment variable: TT_PROCESS_BACKGROUND
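Example
A minimal sketch that runs the instance as a daemon and sends logs to a file, since the background mode requires a log destination other than stderr:
process:
  background: true
log:
  to: file
  file: 'var/log/{{ instance_name }}/tarantool.log'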
-
process.coredump¶ Create coredump files.
Usually, an administrator needs to call
ulimit -c unlimited(or set corresponding options in systemd’s unit file) before running a Tarantool process to get core dumps. Ifprocess.coredumpis enabled, Tarantool sets the corresponding resource limit by itself and the administrator doesn’t need to callulimit -c unlimited(see man 3 setrlimit).This option also sets the state of the
dumpableattribute, which is enabled by default, but may be dropped in some circumstances (according to man 2 prctl, see PR_SET_DUMPABLE).Type: booleanDefault: falseEnvironment variable: TT_PROCESS_COREDUMP
-
process.title¶ Add the given string to the server’s process title (it is shown in the COMMAND column for the Linux commands
ps -efandtop -c).For example, if you set the option to
myservice - {{ instance_name }}:process: title: myservice - {{ instance_name }}
ps -efmight show the Tarantool server process like this:$ ps -ef | grep tarantool 503 68100 68098 0 10:33 pts/2 00:00.10 tarantool <running>: myservice instance1
Type: stringDefault: ‘tarantool - {{ instance_name }}’Environment variable: TT_PROCESS_TITLE
-
process.pid_file¶ Store the process id in this file.
This option may contain a relative file path. In this case, it is interpreted as relative to process.work_dir.
Type: stringDefault: ‘var/run/{{ instance_name }}/tarantool.pid’Environment variable: TT_PROCESS_PID_FILE
-
process.strip_core¶ Whether coredump files should not include memory allocated for tuples – this memory can be large if Tarantool runs under heavy load. Setting to
truemeans “do not include”.Type: booleanDefault: trueEnvironment variable: TT_PROCESS_STRIP_CORE
-
process.username¶ The name of the system user to switch to after start.
Type: stringDefault: box.NULLEnvironment variable: TT_PROCESS_USERNAME
-
process.work_dir¶ A directory where Tarantool working files will be stored (database files, logs, a PID file, a console Unix socket, and other files if an application generates them in the current directory). The server instance switches to
process.work_dirwith chdir(2) after start.If set as a relative file path, it is relative to the current working directory, from where Tarantool is started. If not specified, defaults to the current working directory.
Other directory and file parameters, if set as relative paths, are interpreted as relative to
process.work_dir, for example, directories for storing snapshots and write-ahead logs.Type: stringDefault: box.NULLEnvironment variable: TT_PROCESS_WORK_DIR
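Example
A sketch of how relative paths are resolved against process.work_dir (the directory path is hypothetical); with this configuration, the PID file is created as /var/lib/tarantool/instance001/tarantool.pid:
process:
  work_dir: '/var/lib/tarantool/instance001'
  pid_file: 'tarantool.pid'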
The replication section defines configuration parameters related to replication.
- replication.anon
- replication.autoexpel
- replication.bootstrap_strategy
- replication.connect_timeout
- replication.election_mode
- replication.election_timeout
- replication.election_fencing_mode
- replication.failover
- replication.peers
- replication.skip_conflict
- replication.sync_lag
- replication.sync_timeout
- replication.synchro_queue_max_size
- replication.synchro_quorum
- replication.synchro_timeout
- replication.threads
- replication.timeout
-
replication.anon¶ Whether to make the current instance act as an anonymous replica. Anonymous replicas are read-only and can be used, for example, for backups.
To make the specified instance act as an anonymous replica, set
replication.anontotrue:instance003: replication: anon: true
You can find the full example on GitHub: anonymous_replica.
Anonymous replicas are not displayed in the box.info.replication section. You can check their status using box.info.replication_anon().
While anonymous replicas are read-only, you can write data to replication-local and temporary spaces (created with
is_local = trueandtemporary = true, respectively). Given that changes to replication-local spaces are allowed, an anonymous replica might increase the0component of the vclock value.Here are the limitations of having anonymous replicas in a replica set:
- A replica set must contain at least one non-anonymous instance.
- An anonymous replica can’t be configured as a writable instance by setting database.mode to
rwor making it a leader using <replicaset_name>.leader. - If replication.failover is set to
election, an anonymous replica can have replication.election_mode set tooffonly. - If replication.failover is set to
supervised, an external failover coordinator doesn’t consider anonymous replicas when selecting a bootstrap or replica set leader.
Note
Anonymous replicas are not registered in the _cluster space. This means that there is no limitation on the number of anonymous replicas in a replica set.
Type: booleanDefault:falseEnvironment variable: TT_REPLICATION_ANON
-
replication.autoexpel¶ Since: 3.3.0
The
replication.autoexpeloption is designed for managing dynamic clusters using YAML-based configurations. It enables the automatic expulsion of instances that are removed from the YAML configuration.Only instances with names that match the specified prefix are considered for expulsion; all others are excluded. Additionally, instances without a persistent name are ignored.
If an instance is in read-write mode and has the latest database schema, it initiates the expulsion of instances that:
- Match the specified prefix
- Are absent from the updated YAML configuration
The expulsion process follows the standard procedure, involving the removal of the instance from the
_clustersystem space.The
autoexpellogic is activated during specific events:- Startup. When the cluster starts,
autoexpelchecks and removes instances not matching the updated configuration. - Reconfiguration. When the YAML configuration is reloaded,
autoexpelcompares the current state to the updated configuration and performs necessary expulsions. box.statuswatcher event. Changes detected by thebox.statuswatcher also trigger theautoexpelmechanism.
autoexpeldoes not take any actions on newly joined instances unless one of the triggering events occurs. This means that an instance meeting theautoexpelcriterion can still join the cluster, but it may be removed later during reconfiguration or on subsequent triggering events.Note
The
replication.autoexpeloption governs the expelling process and is configurable at the replicaset, group, and global levels. It is not applicable at the instance level.Configuration fields
by(string, default:nil): specifies theautoexpelcriterion. Currently, onlyprefixis supported and must be explicitly set.enabled(boolean, default:false): enables or disables theautoexpellogic.prefix(string, default:nil): defines the pattern for instance names that are considered part of the cluster.
The purpose of the replication.autoexpel.by field is to define the criterion used for determining which instances in a cluster are subject to theautoexpelprocess.
The by field helps differentiate between:
- Instances that are part of the cluster and should adhere to the YAML configuration.
- Instances or tools (e.g., CDC tools) that use the replication channel but are not part of the cluster configuration.
The default value of by is
nil, meaning noautoexpelcriterion is applied unless explicitly set.Currently, the only supported value for by is
prefix. Theprefixvalue instructs the system to identify instances based on their names, matching them against a prefix pattern defined in the configuration.If the
autoexpelfeature is enabled (enabled: true), thebyfield must be explicitly set toprefix.The absence of this field or an unsupported value will result in configuration errors.
replication: autoexpel: enabled: true by: prefix prefix: '{{ replicaset_name }}'Type: stringDefault:nilEnvironment variable: TT_REPLICATION_AUTOEXPEL_BY
The
replication.autoexpel.enabled field is a boolean configuration option that determines whether the autoexpel logic is active for the cluster. This feature is designed to automatically manage dynamic cluster configurations by removing instances that are no longer present in the YAML configuration.Note
By default, the
enabledfield is set tofalse, meaning theautoexpellogic is turned off. This ensures that no instances are automatically removed unless explicitly configured.Enabling
autoexpellogicTo enable
autoexpel, you should set enabled to true in thereplication.autoexpelsection of your YAML configuration:replication: autoexpel: enabled: true by: prefix prefix: '{{ replicaset_name }}'To disable
autoexpel, set enabled tofalse.Dependencies
If
enabledis set totrue, the following fields are required:
by: specifies the criterion forautoexpel(e.g.,prefix).prefix: defines the pattern used to match instance names for expulsion.Failure to configure these fields when enabled is true will result in a configuration error.
Type: booleanDefault:falseEnvironment variable: TT_REPLICATION_AUTOEXPEL_ENABLED
The
prefixfield filters instances for expulsion by differentiating cluster instances (from the YAML configuration) from external services (e.g., CDC tools). Only instances matching the prefix are considered.A consistent naming pattern ensures the
_clustersystem space automatically aligns with the YAML configuration.If the
prefixfield is not set (nil), theautoexpellogic cannot identify instances for expulsion, and the feature will not function. This field is mandatory when replication.autoexpel.enabled is set totrue.How it works:
- The prefix filters instance names (e.g.,
{{ replicaset_name }}for replicaset-specific names ori-for names starting withi-).- Instances matching the prefix and removed from the YAML configuration are expelled.
- Unnamed instances or those not matching the prefix are ignored.
Dynamic prefix based on replicaset name:
replication: autoexpel: enabled: true by: prefix prefix: '{{ replicaset_name }}'In this setup:
- Instances are grouped by replicaset names (e.g.,
r-001-i-001forreplicaset r-001).- The prefix ensures that only instances with names matching the replica set name are automatically expelled when removed from the configuration.
Static prefix for matching patterns:
replication: autoexpel: enabled: true by: prefix prefix: 'i-'In this setup:
- All instances with names starting with
i-(e.g.,i-001,i-002) are considered for expulsion.- This is useful when instances follow a uniform naming convention.
Type: stringDefault:nilEnvironment variable: TT_REPLICATION_AUTOEXPEL_PREFIX
- Create a
config.yamlfile with the following content:credentials: users: guest: roles: [super] replication: failover: manual autoexpel: enabled: true by: prefix prefix: '{{ replicaset_name }}' iproto: listen: - uri: 'unix/:./var/run/{{ instance_name }}.iproto' groups: g-001: replicasets: r-001: leader: r-001-i-001 instances: r-001-i-001: {} r-001-i-002: {} r-001-i-003: {}
- This configuration:
- Sets up authentication with a guest user assigned the super role.
- Enables the
autoexpeloption to automatically expel instances not present in the YAML file.- Defines instance names based on a prefix pattern:
{{ replicaset_name }}.- Lists three instances:
r-001-i-001,r-001-i-002, andr-001-i-003.
- Open a terminal window and start three instances using the following commands:
tarantool --name r-001-i-001 --config config.yaml -itarantool --name r-001-i-002 --config config.yaml -itarantool --name r-001-i-003 --config config.yaml -i
- Edit
config.yamland remove the following entry forr-001-i-003:The updated
config.yamlshould look like this:groups: g-001: replicasets: r-001: leader: r-001-i-001 instances: r-001-i-001: {} r-001-i-002: {}Save the file.
- For the leader instance (
r-001-i-001), check the_clusterspace:Hint
The
_clustersystem space in Tarantool stores metadata about all instances currently recognized as part of the cluster. It shows which instances are registered and active.You should see
r-001-i-003still listed in the_clustersystem space.
- Reload the configuration:
config = require('config') config:reload()
- Verify the changes:
box.space._cluster:fselect()After the reload,
r-001-i-003should no longer appear in the_clustersystem space.
-
replication.bootstrap_strategy¶ Specifies a strategy used to bootstrap a replica set. The following strategies are available:
auto: a node doesn’t boot if half or more of the other nodes in a replica set are not connected. For example, if a replica set contains 2 or 3 nodes, a node requires 2 connected instances. In the case of 4 or 5 nodes, at least 3 connected instances are required. Moreover, a bootstrap leader fails to boot unless every connected node has chosen it as a bootstrap leader.config: use the specified node to bootstrap a replica set. To specify the bootstrap leader, use the <replicaset_name>.bootstrap_leader option.supervised: a bootstrap leader isn’t chosen automatically but should be appointed using box.ctl.make_bootstrap_leader() on the desired node.legacy(deprecated since 2.11.0): a node requires the replication_connect_quorum number of other nodes to be connected. This option is added to keep the compatibility with the current versions of Cartridge and might be removed in the future.
Type: stringDefault:autoEnvironment variable: TT_REPLICATION_BOOTSTRAP_STRATEGY
-
replication.connect_timeout¶ A timeout (in seconds) a replica waits when trying to connect to a master in a cluster. See orphan status for details.
This parameter is different from replication.timeout, which a master uses to disconnect a replica when the master receives no acknowledgments of heartbeat messages.
Type: numberDefault: 30Environment variable: TT_REPLICATION_CONNECT_TIMEOUT
-
replication.election_mode¶ A role of a replica set node in the leader election process.
The possible values are:
off: a node doesn’t participate in the election activities.voter: a node can participate in the election process but can’t be a leader.candidate: a node should be able to become a leader.manual: allow to control which instance is the leader explicitly instead of relying on automated leader election. By default, the instance acts like a voter – it is read-only and may vote for other candidate instances. Once box.ctl.promote() is called, the instance becomes a candidate and starts a new election round. If the instance wins the elections, it becomes a leader but won’t participate in any new elections.
Note
You can set
replication.election_modeto a value other thanoffif the replication.failover mode iselection.Type: stringDefault: box.NULL (the actual default value depends on replication.failover)Environment variable: TT_REPLICATION_ELECTION_MODE
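Example
A minimal sketch that enables automated leader election and lets the instance become a leader; per the note above, a non-off election_mode assumes the election failover mode:
replication:
  failover: election
  election_mode: candidate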
-
replication.election_timeout¶ Specifies the timeout (in seconds) between election rounds in the leader election process if the previous round ended up with a split vote.
The default timeout is quite big, and in most cases it can be lowered to 300-400 ms.
To avoid repeated split votes, the timeout is randomized on each node during every new election, from 100% to 110% of the original timeout value. For example, if the timeout is 300 ms and 3 nodes start the election simultaneously in the same term, they can set their election timeouts to 300, 310, and 320 ms respectively, or to 305, 302, and 324 ms, and so on. In that way, the votes will never be split because elections on different nodes won’t restart simultaneously.
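Example
A minimal sketch that lowers the election timeout to 300 ms, as suggested above:
replication:
  election_timeout: 0.3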
Type: numberDefault: 5Environment variable: TT_REPLICATION_ELECTION_TIMEOUT
-
replication.election_fencing_mode¶ Specifies the leader fencing mode that affects the leader election process. When the parameter is set to
softorstrict, the leader resigns its leadership if it has less than replication.synchro_quorum of alive connections to the cluster nodes. The resigning leader receives the status of a follower in the current election term and becomes read-only.- In
softmode, a connection is considered dead if there are no responses for 4 * replication.timeout seconds both on the current leader and the followers. - In
strictmode, a connection is considered dead if there are no responses for 2 * replication.timeout seconds on the current leader and 4 * replication.timeout seconds on the followers. This improves the chances that there is only one leader at any time.
Fencing applies to the instances that have the replication.election_mode set to
candidateormanual. To turn off leader fencing, setelection_fencing_modetooff.Type: stringDefault:softPossible values:off,soft,strictEnvironment variable: TT_REPLICATION_ELECTION_FENCING_MODE- In
-
replication.failover¶ A failover mode used to take over a master role when the current master instance fails. The following modes are available:
offLeadership in a replica set is controlled using the database.mode option. In this case, you can set the
database.modeoption torwon all instances in a replica set to make a master-master configuration.The default
database.modeis determined as follows:rwif there is one instance in a replica set;roif there are several instances.manualLeadership in a replica set is controlled using the <replicaset_name>.leader option. In this case, a master-master configuration is forbidden.
In the
manualmode, the database.mode option cannot be set explicitly. The leader is configured in the read-write mode, all the other instances are read-only.electionAutomated leader election is used to control leadership in a replica set.
In the
electionmode, database.mode and <replicaset_name>.leader shouldn’t be set explicitly.supervised(Enterprise Edition only)Leadership in a replica set is controlled using an external failover coordinator.
In the
supervisedmode, database.mode and <replicaset_name>.leader shouldn’t be set explicitly.
See also: Replication tutorials
Note
replication.failovercan be defined in the global, group, and replica set scope.Example
In the example below, the following configuration options are specified:
- In the credentials section, the
replicatoruser with thereplicationrole is created. - iproto.advertise.peer specifies that other instances should connect to an address defined in iproto.listen using the
replicatoruser. replication.failoverspecifies that a master instance should be set manually.- <replicaset_name>.leader sets
instance001as a replica set leader.
credentials: users: replicator: password: 'topsecret' roles: [replication] iproto: advertise: peer: login: replicator replication: failover: manual groups: group001: replicasets: replicaset001: leader: instance001 instances: instance001: iproto: listen: - uri: '127.0.0.1:3301' instance002: iproto: listen: - uri: '127.0.0.1:3302' instance003: iproto: listen: - uri: '127.0.0.1:3303'
Type: stringDefault:offEnvironment variable: TT_REPLICATION_FAILOVER
-
replication.peers¶ URIs of instances that constitute a replica set. These URIs are used by an instance to connect to another instance as a replica.
Alternatively, you can use iproto.advertise.peer to specify a URI used to advertise the current instance to other cluster members.
Example
In the example below, the following configuration options are specified:
- In the credentials section, the
replicatoruser with thereplicationrole is created. replication.peersspecifies URIs of replica set instances.
credentials: users: replicator: password: 'topsecret' roles: [replication] replication: peers: - replicator:topsecret@127.0.0.1:3301 - replicator:topsecret@127.0.0.1:3302 - replicator:topsecret@127.0.0.1:3303
- In the credentials section, the
-
replication.skip_conflict¶ By default, if a replica adds a unique key that another replica has added, replication stops with the
ER_TUPLE_FOUNDerror. Ifreplication.skip_conflictis set totrue, such errors are ignored.Note
Instead of saving the broken transaction to the write-ahead log, it is written as
NOP(No operation).Type: booleanDefault: falseEnvironment variable: TT_REPLICATION_SKIP_CONFLICT
-
replication.sync_lag¶ The maximum delay (in seconds) between the time when data is written to the master and the time when it is written to a replica. If
replication.sync_lagis set tonilor 365 * 100 * 86400 (TIMEOUT_INFINITY), a replica is always considered to be “synced”.Note
This parameter is ignored during bootstrap. See orphan status for details.
Type: numberDefault: 10Environment variable: TT_REPLICATION_SYNC_LAG
-
replication.sync_timeout¶ The timeout (in seconds) that a node waits when trying to sync with other nodes in a replica set after connecting or during a configuration update. This could fail indefinitely if replication.sync_lag is smaller than network latency, or if the replica cannot keep pace with master updates. If
replication.sync_timeoutexpires, the replica enters orphan status.Type: numberDefault: 0Environment variable: TT_REPLICATION_SYNC_TIMEOUT
-
replication.synchro_queue_max_size¶ Since: 3.3.0
The maximum size of the synchronous transaction queue on a master node, in bytes. The size limit isn’t strict, i.e. if there’s at least one free byte, the whole write request fits and no blocking is involved. This parameter ensures that the queue does not grow indefinitely, potentially impacting performance and resource usage, and applies only to the master node.
The
0value disables the limit.If the synchronous queue reaches the configured size limit, new transactions attempting to enter the queue are discarded. In such cases, the system returns an error to the user:
The synchronous transaction queue is full.This size limitation does not apply during the recovery process. Transactions processed during recovery are unaffected by the queue size limit.
Use the following command to view the current size of the synchronous queue:
box.info.synchro.queue.size
Set the synchronous queue size limit in the configuration file:
replication: synchro_queue_max_size: 33554432 # Limit set to 32 MB
Type: integerDefault: 16777216 (16 MB)Environment variable: TT_REPLICATION_SYNCHRO_QUEUE_MAX_SIZE
-
replication.synchro_quorum¶ A number of replicas that should confirm the receipt of a synchronous transaction before it can finish its commit.
This option supports dynamic evaluation of the quorum number. For example, the default value is
N / 2 + 1whereNis the current number of replicas registered in a cluster. Once any replicas are added or removed, the expression is re-evaluated automatically.Note that the default value (
at least 50% of the cluster size + 1) guarantees data reliability. Using a value less than the canonical one might lead to unexpected results, including a split-brain.replication.synchro_quorumis not used on replicas. If the master fails, the pending synchronous transactions will be kept waiting on the replicas until a new master is elected.Note
replication.synchro_quorumdoes not account for anonymous replicas.Type: string, numberDefault:N / 2 + 1Environment variable: TT_REPLICATION_SYNCHRO_QUORUM
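Example
A minimal sketch that sets the quorum as a dynamically evaluated expression, where N is the number of registered replicas (the expression shown matches the default):
replication:
  synchro_quorum: 'N / 2 + 1'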
-
replication.synchro_timeout¶ For synchronous replication only. Specify how many seconds to wait for a synchronous transaction quorum replication until it is declared failed and is rolled back.
It is not used on replicas, so if the master fails, the pending synchronous transactions will be kept waiting on the replicas until a new master is elected.
Type: numberDefault: 5Environment variable: TT_REPLICATION_SYNCHRO_TIMEOUT
-
replication.threads¶ The number of threads spawned to decode the incoming replication data.
In most cases, one thread is enough for all incoming data. Possible values range from 1 to 1000. If there are multiple replication threads, connections to serve are distributed evenly between the threads.
Type: integerDefault: 1Environment variable: TT_REPLICATION_THREADS
-
replication.timeout¶ A time interval (in seconds) used by a master to send heartbeat requests to a replica when there are no updates to send to this replica. For each request, a replica should return a heartbeat acknowledgment.
If a master or replica gets no heartbeat message for
4 * replication.timeoutseconds, a connection is dropped and a replica tries to reconnect to the master.See also: Monitoring a replica set
Type: numberDefault: 1Environment variable: TT_REPLICATION_TIMEOUT
This section describes configuration parameters related to application roles.
Note
Configuration parameters related to roles can be defined in any scope.
-
roles¶ Specify the roles of an instance. To specify a role’s configuration, use the roles_cfg option.
See also: Enabling and configuring roles
Type: arrayDefault: nilEnvironment variable: TT_ROLES
-
roles_cfg¶ Specify a role’s configuration. This option accepts a role name as the key and a role’s configuration as the value. To specify the roles of an instance, use the roles option.
See also: Enabling and configuring roles
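Example
A minimal sketch that enables a custom role and passes a configuration to it; the greeter role name and its greeting parameter are hypothetical:
roles: [ 'greeter' ]
roles_cfg:
  greeter:
    greeting: 'Hello'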
Tip
The experimental.config.utils.schema built-in module provides an API for managing user-defined configurations of applications (
app.cfg) and roles (roles_cfg).Type: mapDefault: nilEnvironment variable: TT_ROLES_CFG
Enterprise Edition
Configuring security parameters is available in the Enterprise Edition only.
The security section defines configuration parameters related to various security settings.
Note
security can be defined in any scope.
- security.auth_delay
- security.auth_retries
- security.auth_type
- security.disable_guest
- security.password_enforce_digits
- security.password_enforce_lowercase
- security.password_enforce_specialchars
- security.password_enforce_uppercase
- security.password_history_length
- security.password_lifetime_days
- security.password_min_length
- security.secure_erasing
-
security.auth_delay¶ Specify a period of time (in seconds) that a specific user should wait for the next attempt after failed authentication.
The security.auth_retries option lets a client try to authenticate the specified number of times before
security.auth_delayis enforced.In the configuration below, Tarantool lets a client try to authenticate with the same username three times. At the fourth attempt, the authentication delay configured with
security.auth_delayis enforced. This means that a client should wait 10 seconds after the first failed attempt.security: auth_delay: 10 auth_retries: 2
Type: numberDefault: 0Environment variable: TT_SECURITY_AUTH_DELAY
-
security.auth_retries¶ Specify the maximum number of authentication retries allowed before security.auth_delay is enforced. The default value is 0, which means
security.auth_delayis enforced after the first failed authentication attempt.The retry counter is reset after
security.auth_delayseconds since the first failed attempt. For example, if a client tries to authenticate fewer thansecurity.auth_retriestimes withinsecurity.auth_delayseconds, no authentication delay is enforced. The retry counter is also reset after any successful authentication attempt.Type: integerDefault: 0Environment variable: TT_SECURITY_AUTH_RETRIES
-
security.auth_type¶ Specify a protocol used to authenticate users. The possible values are:
chap-sha1: use the CHAP protocol withSHA-1hashing applied to passwords.pap-sha256: use PAP authentication with theSHA256hashing algorithm.
Note that CHAP stores password hashes in the
_userspace unsalted. If an attacker gains access to the database, they may crack a password, for example, using a rainbow table. For PAP, a password is salted with a user-unique salt before saving it in the database, which keeps the database protected from cracking using a rainbow table.To enable PAP, specify the
security.auth_typeoption as follows:security: auth_type: 'pap-sha256'
Type: stringDefault: ‘chap-sha1’Environment variable: TT_SECURITY_AUTH_TYPE
-
security.disable_guest¶ If true, turn off access over remote connections from unauthenticated or guest users. This option affects connections between cluster members and net.box connections.
Type: booleanDefault: falseEnvironment variable: TT_SECURITY_DISABLE_GUEST
-
security.password_enforce_digits¶ If true, a password should contain digits (0-9).
Type: booleanDefault: falseEnvironment variable: TT_SECURITY_PASSWORD_ENFORCE_DIGITS
-
security.password_enforce_lowercase¶ If true, a password should contain lowercase letters (a-z).
Type: booleanDefault: falseEnvironment variable: TT_SECURITY_PASSWORD_ENFORCE_LOWERCASE
-
security.password_enforce_specialchars¶ If true, a password should contain at least one special character (such as
&|?!@$).Type: booleanDefault: falseEnvironment variable: TT_SECURITY_PASSWORD_ENFORCE_SPECIALCHARS
-
security.password_enforce_uppercase¶ If true, a password should contain uppercase letters (A-Z).
Type: booleanDefault: falseEnvironment variable: TT_SECURITY_PASSWORD_ENFORCE_UPPERCASE
-
security.password_history_length¶ Specify the number of unique new user passwords before an old password can be reused.
Note
Tarantool uses the
auth_historyfield in the box.space._user system space to store user passwords.Type: integerDefault: 0Environment variable: TT_SECURITY_PASSWORD_HISTORY_LENGTH
-
security.password_lifetime_days¶ Specify the maximum period of time (in days) a user can use the same password. When this period ends, a user gets the “Password expired” error on a login attempt. To restore access for such users, use box.schema.user.passwd.
Note
The default 0 value means that a password never expires.
Type: integerDefault: 0Environment variable: TT_SECURITY_PASSWORD_LIFETIME_DAYS
-
security.password_min_length¶ Specify the minimum number of characters for a password.
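Example
A combined sketch of several password-policy options: passwords must be at least 10 characters long and contain digits and uppercase letters:
security:
  password_min_length: 10
  password_enforce_digits: true
  password_enforce_uppercase: true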
Type: integerDefault: 0Environment variable: TT_SECURITY_PASSWORD_MIN_LENGTH
-
security.secure_erasing¶ If true, forces Tarantool to overwrite a data file a few times before deletion to render recovery of a deleted file impossible. The option applies to both
.xlogand.snapfiles as well as Vinyl data files.Type: booleanDefault: falseEnvironment variable: TT_SECURITY_SECURE_ERASING
The sharding section defines configuration parameters related to sharding.
Note
Sharding support requires installing the vshard module.
The minimum required version of vshard is 0.1.25.
- sharding.bucket_count
- sharding.discovery_mode
- sharding.failover_ping_timeout
- sharding.lock
- sharding.rebalancer_disbalance_threshold
- sharding.rebalancer_max_receiving
- sharding.rebalancer_max_sending
- sharding.rebalancer_mode
- sharding.roles
- sharding.sched_move_quota
- sharding.sched_ref_quota
- sharding.shard_index
- sharding.sync_timeout
- sharding.weight
- sharding.zone
-
sharding.bucket_count¶ The total number of buckets in a cluster. Learn more in Bucket count.
Note
This option should be defined at the global level.
Example
sharding: bucket_count: 1000
Type: integerDefault: 3000Environment variable: TT_SHARDING_BUCKET_COUNT
-
sharding.discovery_mode¶ A mode of the background discovery fiber used by the router to find buckets. Learn more in vshard.router.discovery_set().
Note
This option should be defined at the global level.
Type: stringDefault: ‘on’Possible values: ‘on’, ‘off’, ‘once’Environment variable: TT_SHARDING_DISCOVERY_MODE
-
sharding.failover_ping_timeout¶ The timeout (in seconds) after which a node is considered unavailable if there are no responses during this period. The failover fiber is used to detect if a node is down.
Note
This option should be defined at the global level.
Type: numberDefault: 5Environment variable: TT_SHARDING_FAILOVER_PING_TIMEOUT
-
sharding.lock¶ Whether a replica set is locked. A locked replica set cannot receive new buckets nor migrate its own buckets.
Note
sharding.lockcan be specified at the replica set level or higher.Type: booleanDefault: nilEnvironment variable: TT_SHARDING_LOCK
-
sharding.rebalancer_disbalance_threshold¶ The maximum bucket disbalance threshold (in percent). The disbalance is calculated for each replica set using the following formula:
|etalon_bucket_count - real_bucket_count| / etalon_bucket_count * 100
Note
This option should be defined at the global level.
Type: numberDefault: 1Environment variable: TT_SHARDING_REBALANCER_DISBALANCE_THRESHOLD
-
sharding.rebalancer_max_receiving¶ The maximum number of buckets that can be received in parallel by a single replica set. This number must be limited because the rebalancer sends a large number of buckets from the existing replica sets to the newly added one. This produces a heavy load on the new replica set.
Note
This option should be defined at the global level.
Example
Suppose
rebalancer_max_receivingis equal to 100 andbucket_countis equal to 1000. There are 3 replica sets with 333, 333, and 334 buckets respectively. When a new replica set is added, each replica set’setalon_bucket_countbecomes equal to 250. Rather than receiving all 250 buckets at once, the new replica set receives 100, 100, and 50 buckets sequentially.Type: integerDefault: 100Environment variable: TT_SHARDING_REBALANCER_MAX_RECEIVING
-
sharding.rebalancer_max_sending¶ The degree of parallelism for parallel rebalancing.
Note
This option should be defined at the global level.
Type: integerDefault: 1Maximum: 15Environment variable: TT_SHARDING_REBALANCER_MAX_SENDING
-
sharding.rebalancer_mode¶ Since: 3.1.0
Configure how a rebalancer is selected:
auto(default): if there are no replica sets with therebalancersharding role (sharding.roles), a replica set with the rebalancer is selected automatically among all replica sets.manual: one of the replica sets should have therebalancersharding role. The rebalancer is in this replica set.off: rebalancing is turned off regardless of whether a replica set with therebalancersharding role exists or not.
Note
This option should be defined at the global level.
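Example
A minimal sketch that requires the rebalancer to run in the replica set that has the rebalancer sharding role (see sharding.roles):
sharding:
  rebalancer_mode: manual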
Type: stringDefault: ‘auto’Environment variable: TT_SHARDING_REBALANCER_MODE
-
sharding.roles¶ Roles of a replica set in regard to sharding. A replica set can have the following roles:
router: a replica set acts as a router.storage: a replica set acts as a storage.rebalancer: a replica set acts as a rebalancer.
The
rebalancerrole is optional. If it is not specified, a rebalancer is selected automatically from the master instances of replica sets.There can be at most one replica set with the
rebalancerrole. Additionally, this replica set should have astoragerole.Example
replicasets: storage-a: sharding: roles: [storage, rebalancer]
See also: Sharding roles
Note
sharding.rolescan be specified at the replica set level or higher.Type: arrayDefault: nilEnvironment variable: TT_SHARDING_ROLES
-
sharding.sched_move_quota¶ A scheduler’s bucket move quota used by the rebalancer.
sched_move_quotadefines how many bucket moves can be done in a row if there are pending storage refs. Then, bucket moves are blocked and a router continues making map-reduce requests.See also: sharding.sched_ref_quota
Note
This option should be defined at the global level.
Type: numberDefault: 1Environment variable: TT_SHARDING_SCHED_MOVE_QUOTA
-
sharding.sched_ref_quota¶ A scheduler’s storage ref quota used by a router’s map-reduce API. For example, the vshard.router.map_callrw() function implements consistent map-reduce over the entire cluster.
sched_ref_quotadefines how many storage refs, therefore map-reduce requests, can be executed on the storage in a row if there are pending bucket moves. Then, storage refs are blocked and the rebalancer continues bucket moves.See also: sharding.sched_move_quota
Note
This option should be defined at the global level.
Type: numberDefault: 300Environment variable: TT_SHARDING_SCHED_REF_QUOTA
-
sharding.shard_index¶ The name or ID of a TREE index over the bucket id. Spaces without this index do not participate in a sharded Tarantool cluster and can be used as regular spaces if needed. It is necessary to specify the first part of the index, other parts are optional.
Note
This option should be defined at the global level.
See also: Data definition
Type: stringDefault: ‘bucket_id’Environment variable: TT_SHARDING_SHARD_INDEX
-
sharding.sync_timeout¶ The timeout to wait for synchronization of the old master with replicas before demotion. Used when switching a master or when manually calling the sync() function.
Note
This option should be defined at the global level.
Type: numberDefault: 1Environment variable: TT_SHARDING_SYNC_TIMEOUT
-
sharding.weight¶ Since: 3.1.0
The relative amount of data that a replica set can store. Learn more at Replica set weights.
Note
sharding.weightcan be specified at the replica set level.Type: numberDefault: 1Environment variable: TT_SHARDING_WEIGHT
-
sharding.zone¶ A zone that can be set for routers and replicas. This allows sending read-only requests not only to a master instance but to any available replica that is the nearest to the router.
Note
sharding.zonecan be specified at any level.Type: integerDefault: nilEnvironment variable: TT_SHARDING_ZONE
The snapshot section defines configuration parameters related to the snapshot files.
To learn more about the snapshots’ configuration, check the Persistence page.
Note
snapshot can be defined in any scope.
-
snapshot.dir¶ A directory where memtx stores snapshot (.snap) files. A relative path in this option is interpreted as relative to
process.work_dir.By default, snapshots and WAL files are stored in the same directory. However, you can set different values for the
snapshot.dirand wal.dir options to store them on different physical disks for performance reasons.Type: stringDefault: ‘var/lib/{{ instance_name }}’Environment variable: TT_SNAPSHOT_DIR
-
snapshot.snap_io_rate_limit¶ Reduce the throttling effect of box.snapshot() on INSERT/UPDATE/DELETE performance by setting a limit on how many megabytes per second it can write to disk. The same can be achieved by splitting wal.dir and snapshot.dir locations and moving snapshots to a separate disk. The limit also affects what box.stat.vinyl().regulator may show for the write rate of dumps to
.runand.indexfiles.Type: numberDefault: box.NULLEnvironment variable: TT_SNAPSHOT_SNAP_IO_RATE_LIMIT
-
snapshot.count¶ The maximum number of snapshots that are stored in the snapshot.dir directory. If the number of snapshots after creating a new one exceeds this value, the Tarantool garbage collector deletes old snapshots. If
snapshot.countis set to zero, the garbage collector does not delete old snapshots.Example
In the example, the checkpoint daemon creates a snapshot every two hours until it has created three snapshots. After creating a new snapshot (the fourth one), the oldest snapshot and any associated write-ahead-log files are deleted.
snapshot: by: interval: 7200 count: 3
Note
Snapshots will not be deleted if replication is ongoing and the file has not been relayed to a replica. Therefore,
snapshot.counthas no effect unless all replicas are alive.Type: integerDefault: 2Environment variable: TT_SNAPSHOT_COUNT
-
snapshot.by.interval¶ The interval in seconds between actions by the checkpoint daemon. If the option is set to a value greater than zero, and there is activity that causes change to a database, then the checkpoint daemon calls box.snapshot() every
snapshot.by.intervalseconds, creating a new snapshot file each time. If the option is set to zero, the checkpoint daemon is disabled.Example
In the example, the checkpoint daemon creates a new database snapshot every two hours, if there is activity.
by: interval: 7200
Type: numberDefault: 3600Environment variable: TT_SNAPSHOT_BY_INTERVAL
-
snapshot.by.wal_size¶ The threshold for the total size in bytes for all WAL files created since the last snapshot taken. Once the configured threshold is exceeded, the WAL thread notifies the checkpoint daemon that it must make a new snapshot and delete old WAL files.
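Example
A minimal sketch that triggers a new snapshot once the WAL files created since the last snapshot exceed roughly 512 MB:
snapshot:
  by:
    wal_size: 536870912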
Type: integerDefault: 10^18Environment variable: TT_SNAPSHOT_BY_WAL_SIZE
The sql section defines configuration parameters related to SQL.
Note
sql can be defined in any scope.
-
sql.cache_size¶ The maximum cache size (in bytes) for all SQL prepared statements. To see the actual cache size, use box.info.sql().cache.size.
Type: integerDefault: 5242880Environment variable: TT_SQL_CACHE_SIZE
The vinyl section defines configuration parameters related to the
vinyl storage engine.
Note
vinyl can be defined in any scope.
- vinyl.bloom_fpr
- vinyl.cache
- vinyl.defer_deletes
- vinyl.dir
- vinyl.max_tuple_size
- vinyl.memory
- vinyl.page_size
- vinyl.range_size
- vinyl.read_threads
- vinyl.run_count_per_level
- vinyl.run_size_ratio
- vinyl.timeout
- vinyl.write_threads
-
vinyl.bloom_fpr¶ A bloom filter’s false positive rate – the acceptable probability that the bloom filter gives a false positive result. The
vinyl.bloom_fprsetting is a default value for the bloom_fpr option passed tospace_object:create_index().Type: numberDefault: 0.05Environment variable: TT_VINYL_BLOOM_FPR
-
vinyl.cache¶ The cache size for the vinyl storage engine. The cache can be resized dynamically.
Type: integerDefault: 128 * 1024 * 1024Environment variable: TT_VINYL_CACHE
-
vinyl.defer_deletes¶ Enable the deferred DELETE optimization in vinyl. It has been disabled by default since Tarantool version 2.10 to avoid possible performance degradation of secondary index reads.
Type: booleanDefault: falseEnvironment variable: TT_VINYL_DEFER_DELETES
-
vinyl.dir¶ A directory where vinyl files or subdirectories will be stored.
This option may contain a relative file path. In this case, it is interpreted as relative to process.work_dir.
Type: stringDefault: ‘var/lib/{{ instance_name }}’Environment variable: TT_VINYL_DIR
-
vinyl.max_tuple_size¶ The size of the largest allocation unit for the vinyl storage engine. It can be increased if it is necessary to store large tuples.
Type: integerDefault: 1024 * 1024Environment variable: TT_VINYL_MAX_TUPLE_SIZE
-
vinyl.memory¶ The maximum number of in-memory bytes that vinyl uses.
Type: integerDefault: 128 * 1024 * 1024Environment variable: TT_VINYL_MEMORY
-
vinyl.page_size¶ The page size. A page is a read/write unit for vinyl disk operations. The
vinyl.page_sizesetting is a default value for the page_size option passed tospace_object:create_index().Type: integerDefault: 8 * 1024Environment variable: TT_VINYL_PAGE_SIZE
-
vinyl.range_size¶ The default maximum range size for a vinyl index, in bytes. The maximum range size affects the decision of whether to split a range.
If
vinyl.range_size is specified (but the value is not null or 0), then it is used as the default value for the range_size option passed to space_object:create_index(). If
vinyl.range_size is not specified (or is explicitly set to null or 0), and range_size is not specified when the index is created, then Tarantool sets a value later depending on performance considerations. To see the actual value, use index_object:stat().range_size.Type: integerDefault: box.NULL (means that an effective default is determined at runtime)Environment variable: TT_VINYL_RANGE_SIZE
-
vinyl.read_threads¶ The maximum number of read threads that vinyl can use for concurrent operations, such as I/O and compression.
Type: integerDefault: 1Environment variable: TT_VINYL_READ_THREADS
-
vinyl.run_count_per_level¶ The maximum number of runs per level in the vinyl LSM tree. If this number is exceeded, a new level is created. The
vinyl.run_count_per_level setting is a default value for the run_count_per_level option passed to space_object:create_index().Type: integerDefault: 2Environment variable: TT_VINYL_RUN_COUNT_PER_LEVEL
-
vinyl.run_size_ratio¶ The ratio between the sizes of different levels in the LSM tree. The
vinyl.run_size_ratio setting is a default value for the run_size_ratio option passed to space_object:create_index().Type: numberDefault: 3.5Environment variable: TT_VINYL_RUN_SIZE_RATIO
-
vinyl.timeout¶ The vinyl storage engine has a scheduler that performs compaction. When vinyl is low on available memory, the compaction scheduler may be unable to keep up with incoming update requests. In that situation, queries may time out after
vinyl.timeout seconds. This should rarely occur, since normally vinyl throttles inserts when it is running low on compaction bandwidth. Compaction can also be initiated manually with index_object:compact().Type: integerDefault: 60Environment variable: TT_VINYL_TIMEOUT
-
vinyl.write_threads¶ The maximum number of write threads that vinyl can use for some concurrent operations, such as I/O and compression.
Type: integerDefault: 4Environment variable: TT_VINYL_WRITE_THREADS
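A hedged example of tuning both vinyl thread pools together; the thread counts are assumptions and should be matched to the actual workload and hardware:
vinyl:
  read_threads: 2
  write_threads: 4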
The wal section defines configuration parameters related to the write-ahead log.
To learn more about the WAL configuration, check the Persistence page.
Note
wal can be defined in any scope.
- wal.cleanup_delay
- wal.dir
- wal.dir_rescan_delay
- wal.max_size
- wal.mode
- wal.queue_max_size
- wal.retention_period
- wal.ext.*
-
wal.cleanup_delay¶ The delay in seconds used to prevent the Tarantool garbage collector from immediately removing write-ahead log files after a node restart. This delay eliminates possible erroneous situations when the master deletes WALs needed by replicas after restart. As a consequence, replicas sync with the master faster after its restart and don’t need to download all the data again. Once all the nodes in the replica set are up and running, a scheduled garbage collection is started again even if
wal.cleanup_delay has not expired.
Note
The option has no effect on nodes running as anonymous replicas.
See also: wal.retention_period
Type: numberDefault: 14400Environment variable: TT_WAL_CLEANUP_DELAY
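As an illustration, assuming replicas are expected to reconnect within one hour after a restart, the delay could be shortened as in the sketch below (the value is an assumption, not a recommendation):
wal:
  cleanup_delay: 3600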
-
wal.dir¶ A directory where write-ahead log (
.xlog) files are stored. A relative path in this option is interpreted as relative to process.work_dir. By default, WAL files and snapshots are stored in the same directory. However, you can set different values for the
wal.dir and snapshot.dir options to store them on different physical disks for performance reasons.Type: stringDefault: ‘var/lib/{{ instance_name }}’Environment variable: TT_WAL_DIR
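A minimal sketch of the setup described above, assuming hypothetical mount points /mnt/wal_disk and /mnt/snap_disk:
wal:
  dir: '/mnt/wal_disk/{{ instance_name }}'
snapshot:
  dir: '/mnt/snap_disk/{{ instance_name }}'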
-
wal.dir_rescan_delay¶ The time interval in seconds between periodic scans of the write-ahead log directory, performed to check for changes to write-ahead log files for the sake of replication or hot standby.
Type: numberDefault: 2Environment variable: TT_WAL_DIR_RESCAN_DELAY
-
wal.max_size¶ The maximum number of bytes in a single write-ahead log file. When a request would cause an
.xlog file to become larger than wal.max_size, Tarantool creates a new WAL file.Type: integerDefault: 268435456Environment variable: TT_WAL_MAX_SIZE
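For illustration, the sketch below makes WAL files rotate at 128 MB instead of the default 256 MB; the value is an assumption chosen for the example:
wal:
  max_size: 134217728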
-
wal.mode¶ Specify fiber-WAL-disk synchronization mode as:
- none: write-ahead log is not maintained. A node with wal.mode set to none can't be a replication master.
- write: fibers wait for their data to be written to the write-ahead log (no fsync(2)).
- fsync: fibers wait for their data, fsync(2) follows each write(2).
Type: stringDefault: ‘write’Environment variable: TT_WAL_MODE
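For example, to trade some write throughput for stronger durability by forcing fsync(2) after each write, a configuration might look like this:
wal:
  mode: 'fsync'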
-
wal.queue_max_size¶ The size of the queue in bytes used by a replica to submit new transactions to a write-ahead log (WAL). This option helps limit the rate at which a replica submits transactions to the WAL.
Limiting the queue size might be useful when a replica is trying to sync with a master and reads new transactions faster than it can write them to the WAL.
Note
You might consider increasing the
wal.queue_max_size value in case of large tuples (approximately one megabyte or larger).Type: integerDefault: 16777216Environment variable: TT_WAL_QUEUE_MAX_SIZE
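A hedged sketch for the large-tuple case mentioned in the note above; the 64 MB value is an assumption chosen only to illustrate the syntax:
wal:
  queue_max_size: 67108864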
-
wal.retention_period¶ Since: 3.1.0 (Enterprise Edition only)
The delay in seconds used to prevent the Tarantool garbage collector from removing a write-ahead log file after it has been closed. If a node is restarted,
wal.retention_period counts down from the last modification time of the write-ahead log file. The garbage collector doesn’t track write-ahead logs that are to be relayed to anonymous replicas, such as:
- Anonymous replicas added as a part of a cluster configuration (see replication.anon).
- CDC (Change Data Capture) that retrieves data using anonymous replication.
In case of a replica or CDC downtime, the required write-ahead logs can be removed. As a result, such a replica needs to be rebootstrapped. You can use
wal.retention_period to prevent such issues. Note that the wal.cleanup_delay option also sets the delay used to prevent the Tarantool garbage collector from removing write-ahead logs. The difference is that the garbage collector doesn't take into account
wal.cleanup_delay if all the nodes in the replica set are up and running, which may lead to the removal of the required write-ahead logs.
Note
box.info.gc().wal_retention_vclock can be used to get a vclock value of the oldest write-ahead log protected by
wal.retention_period.Type: numberDefault: 0Environment variable: TT_WAL_RETENTION_PERIOD
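As an illustrative sketch (Enterprise Edition only; the one-day value is an assumption), keeping closed WAL files for 24 hours could look like this:
wal:
  retention_period: 86400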
Enterprise Edition
Configuring wal.ext.* parameters is available in the Enterprise Edition only.
This section describes options related to WAL extensions.
-
wal.ext.new¶ Enable storing a new tuple for each CRUD operation performed. The option is in effect for all spaces. To adjust the option for specific spaces, use the wal.ext.spaces option.
Type: booleanDefault: falseEnvironment variable: TT_WAL_EXT_NEW
-
wal.ext.old¶ Enable storing an old tuple for each CRUD operation performed. The option is in effect for all spaces. To adjust the option for specific spaces, use the wal.ext.spaces option.
Type: booleanDefault: falseEnvironment variable: TT_WAL_EXT_OLD
-
wal.ext.spaces¶ Enable or disable storing an old and new tuple in the WAL record for a given space explicitly. The configuration for specific spaces has priority over the configuration in the wal.ext.new and wal.ext.old options.
The option is a key-value pair:
- The key is a space name (string).
- The value is a table that includes two optional boolean options:
old and new. The format and the default value of these options are described in wal.ext.old and wal.ext.new.
Example
In the example, only new tuples are added to the log for the
bands space.
ext:
  new: true
  old: true
  spaces:
    bands:
      old: false
Type: mapDefault: nilEnvironment variable: TT_WAL_EXT_SPACES