Managing cluster configurations

$ tt cluster COMMAND {APPLICATION[:APP_INSTANCE] | URI} [FILE] [OPTION ...]

tt cluster manages YAML configurations of Tarantool applications. This command works both with local YAML files in application directories and with centralized configuration storages (etcd or Tarantool-based).
COMMAND is one of the following: publish, show, replicaset.
$ tt cluster publish {APPLICATION[:APP_INSTANCE] | URI} [FILE] [OPTION ...]

tt cluster publish publishes a cluster configuration using an arbitrary YAML file as a source.

tt cluster publish can modify local cluster configurations stored in config.yaml files inside application directories. To write a configuration to a local config.yaml, run tt cluster publish with two arguments:

- the application name
- the path to a YAML file from which the configuration should be taken

$ tt cluster publish myapp source.yaml
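The source file contains a regular cluster configuration. A minimal sketch is shown below; the group, replica set, and instance names, as well as the listen URI, are illustrative, and a real configuration typically contains more options:

```yaml
# source.yaml -- minimal sketch of a cluster configuration (illustrative names)
groups:
  group-001:
    replicasets:
      replicaset-001:
        instances:
          instance-001:
            iproto:
              listen:
              - uri: 127.0.0.1:3301
```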
tt cluster publish can modify centralized cluster configurations in storages of both supported types: etcd or a Tarantool-based configuration storage. To publish a configuration from a file to a centralized configuration storage, run tt cluster publish with the URI of this storage's instance as the target. For example, the command below publishes a configuration from source.yaml to a local etcd instance running on the default port 2379:

$ tt cluster publish "http://localhost:2379/myapp" source.yaml

A URI must include a prefix that is unique for the application. It can also include credentials and other connection parameters. Find the detailed description in the URI format section.
In addition to whole cluster configurations, tt cluster publish can manage configurations of specific instances within applications. In this case, it operates with YAML fragments that describe a single instance configuration section. For example, the following YAML file can be a source when publishing an instance configuration:

# instance_source.yaml
iproto:
  listen:
  - uri: 127.0.0.1:3311
To send an instance configuration to a local config.yaml, run tt cluster publish with the application:instance pair as the target argument:

$ tt cluster publish myapp:instance-002 instance_source.yaml

To send an instance configuration to a centralized configuration storage, specify the instance name in the name argument of the storage URI:

$ tt cluster publish "http://localhost:2379/myapp?name=instance-002" instance_source.yaml
$ tt cluster show {APPLICATION[:APP_INSTANCE] | URI} [OPTION ...]

tt cluster show displays a cluster configuration.

tt cluster show can read local cluster configurations stored in config.yaml files inside application directories. To print a local configuration from an application's config.yaml, specify the application name as an argument:

$ tt cluster show myapp
tt cluster show can display centralized cluster configurations from configuration storages of both supported types: etcd or a Tarantool-based configuration storage. To print a cluster configuration from a centralized storage, run tt cluster show with a storage URI including the prefix identifying the application. For example, to print myapp's configuration from a local etcd storage:

$ tt cluster show "http://localhost:2379/myapp"
In addition to whole cluster configurations, tt cluster show can display configurations of specific instances within applications. In this case, it prints YAML fragments that describe a single instance configuration section.

To print an instance configuration from a local config.yaml, use the application:instance argument:

$ tt cluster show myapp:instance-002

To print an instance configuration from a centralized configuration storage, specify the instance name in the name argument of the URI:

$ tt cluster show "http://localhost:2379/myapp?name=instance-002"
$ tt cluster replicaset SUBCOMMAND {APPLICATION[:APP_INSTANCE] | URI} [OPTION ...]

tt cluster replicaset manages instances in a replica set. It supports the following subcommands: promote and demote.

Important: tt cluster replicaset works only with centralized cluster configurations. To manage replica set leaders in clusters with local YAML configurations, use tt replicaset promote and tt replicaset demote.
$ tt cluster replicaset promote URI INSTANCE_NAME [OPTION ...]

tt cluster replicaset promote promotes the specified instance, making it a leader of its replica set. This command works on Tarantool clusters with centralized configuration and with failover modes off and manual. It updates the centralized configuration according to the specified arguments and reloads it:

- off failover mode: the command sets database.mode to rw on the specified instance.

  Important: If failover is off, the command doesn't consider the modes of other replica set members, so there can be any number of read-write instances in one replica set.

- manual failover mode: the command updates the leader option of the replica set configuration. Other instances of this replica set become read-only.

Example:

$ tt cluster replicaset promote "http://localhost:2379/myapp" storage-001-a
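In the manual failover mode, the resulting change to the centralized configuration can be sketched as the following fragment; the replica set name is illustrative:

```yaml
# Fragment of the replica set configuration after a promote call
# in manual failover mode (illustrative names): the leader option
# now points to the promoted instance.
replicasets:
  storage-001:
    leader: storage-001-a
```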
$ tt cluster replicaset demote URI INSTANCE_NAME [OPTION ...]

tt cluster replicaset demote demotes an instance in a replica set. This command works on Tarantool clusters with centralized configuration and with the off failover mode.

Note: In clusters with the manual failover mode, you can demote a read-write instance by promoting a read-only instance from the same replica set with tt cluster replicaset promote.

The command sets the instance's database.mode to ro and reloads the configuration.

Important: If failover is off, the command doesn't consider the modes of other replica set members, so there can be any number of read-write instances in one replica set.
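Following the synopsis above, a demote call for the same storage URI and instance as in the promote example might look like this (it sets database.mode to ro in the instance's configuration section):

```
$ tt cluster replicaset demote "http://localhost:2379/myapp" storage-001-a
```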
The changes that tt cluster replicaset makes to the configuration storage occur transactionally. Each call creates a new revision. In case of a revision mismatch, an error is raised.

If the cluster configuration is distributed over multiple keys in the configuration storage (for example, in two paths /myapp/config/k1 and /myapp/config/k2), the affected instance configuration can be present in more than one of them. If it is found under several different keys, the command prompts the user to choose a key for patching. You can skip the selection by adding the -f/--force option:

$ tt cluster replicaset promote "http://localhost:2379/myapp" storage-001-a --force
In this case, the command selects the key for patching automatically. A key's priority is determined by the detail level of the instance or replica set configuration stored under this key. For example, when failover is off, a key with instance.database options takes precedence over a key with only the instance field. In case of equal priority, the first key in the lexicographical order is patched.
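To illustrate the priority rule, suppose two keys carry configuration fragments that both mention the same instance (key paths and instance names here are illustrative): the first one contains instance.database options, so with failover off it is selected for patching over the second:

```yaml
# Fragment stored under /myapp/config/k1 -- higher priority:
# the instance section contains database options.
instances:
  storage-001-a:
    database:
      mode: rw

# Fragment stored under /myapp/config/k2 -- lower priority:
# the instance is declared without database options.
instances:
  storage-001-a: {}
```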
There are three ways to pass the credentials for connecting to the centralized configuration storage. They all apply to both etcd and Tarantool-based storages. The following list shows these ways ordered by precedence, from highest to lowest:

1. Credentials specified in the storage URI: https://username:password@host:port/prefix:

   $ tt cluster show "http://myuser:p4$$w0rD@localhost:2379/myapp"

2. The tt cluster options -u/--username and -p/--password:

   $ tt cluster show "http://localhost:2379/myapp" -u myuser -p p4$$w0rD

3. Environment variables TT_CLI_ETCD_USERNAME and TT_CLI_ETCD_PASSWORD:

   $ export TT_CLI_ETCD_USERNAME=myuser
   $ export TT_CLI_ETCD_PASSWORD=p4$$w0rD
   $ tt cluster show "http://localhost:2379/myapp"
If connection encryption is enabled on the configuration storage, pass the required SSL parameters in the URI arguments.
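For example, assuming TLS is enabled on the storage and the client certificate, key, and CA file lie in a local certs/ directory (the paths are illustrative), the SSL arguments might look like this:

```
$ tt cluster show "https://localhost:2379/myapp?ssl_cert_file=certs/client.crt&ssl_key_file=certs/client.key&ssl_ca_file=certs/ca.crt"
```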
A URI of the cluster configuration storage has the following format:

http(s)://[username:password@]host:port[/prefix][?arguments]

- username and password define credentials for connecting to the configuration storage.
- prefix is a base path identifying a specific application in the storage.
- arguments defines connection parameters. The following arguments are available:
  - name – a name of an instance in the cluster configuration.
  - key – a target configuration key in the specified prefix.
  - timeout – a request timeout in seconds. Default: 3.0.
  - ssl_key_file – a path to a private SSL key file.
  - ssl_cert_file – a path to an SSL certificate file.
  - ssl_ca_file – a path to a trusted certificate authorities (CA) file.
  - ssl_ca_path – a path to a trusted certificate authorities (CA) directory.
  - ssl_ciphers – a colon-separated (:) list of SSL cipher suites the connection can use (for Tarantool-based storage only).
  - verify_host – verify the certificate's name against the host. Default: true.
  - verify_peer – verify the peer's SSL certificate. Default: true.
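For example, a URI that combines several of these arguments (the values are illustrative):

```
$ tt cluster show "http://localhost:2379/myapp?name=instance-002&timeout=5"
```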
-u, --username STRING

  A username for connecting to the configuration storage.
  See also: Authentication.

-p, --password STRING

  A password for connecting to the configuration storage.
  See also: Authentication.

--force

  Applicable to: publish

  Skip validation when publishing. Default: false (validation is enabled).

--validate

  Applicable to: show

  Validate the printed configuration. Default: false (validation is disabled).

--with-integrity-check STRING

  Enterprise Edition: this option is supported by the Enterprise Edition only.

  Applicable to: publish

  Generate hashes and signatures for integrity checks.