Basic tt migrations tutorial
Example on GitHub: migrations
In this tutorial, you learn to define the cluster data schema using the centralized migration management mechanism implemented in the Enterprise Edition of the tt utility.
The centralized migration mechanism works with Tarantool EE clusters that:
- use etcd as a centralized configuration storage
- use the CRUD module or its Enterprise version for data distribution
First, start up an etcd instance to use as a configuration storage:
$ etcd
etcd runs on the default port 2379.
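To check that the storage is reachable before continuing, you can query its health (this assumes the etcdctl client is installed alongside etcd):
$ etcdctl endpoint health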
Optionally, enable etcd authentication by executing the following script:
#!/usr/bin/env bash
etcdctl user add root:topsecret
etcdctl role add app_config_manager
etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
etcdctl user add app_user:config_pass
etcdctl user grant-role app_user app_config_manager
etcdctl auth enable
It creates an etcd user app_user with read and write permissions to the /myapp prefix, in which the cluster configuration will be stored. The user's password is config_pass.
Note
If you don't enable etcd authentication, make tt migrations calls without the configuration storage credentials.
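For example, with authentication disabled, the publish call used later in this tutorial takes a URI without a user and password (an illustrative variant, not part of the main scenario):
$ tt migrations publish "http://localhost:2379/myapp"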
Initialize a tt environment:
$ tt init
In the instances.enabled directory, create the myapp directory.
Go to the instances.enabled/myapp directory and create application files:
instances.yml:
router-001-a:
storage-001-a:
storage-001-b:
storage-002-a:
storage-002-b:
config.yaml:
config:
  etcd:
    endpoints:
    - http://localhost:2379
    prefix: /myapp/
    username: app_user
    password: config_pass
    http:
      request:
        timeout: 3
myapp-scm-1.rockspec:
package = 'myapp'
version = 'scm-1'
source = {
    url = '/dev/null',
}
dependencies = {
    'crud == 1.5.2',
}
build = {
    type = 'none';
}
Create the source.yaml with a cluster configuration to publish to etcd:
Note
This configuration describes a typical CRUD-enabled sharded cluster with one router and two storage replica sets, each including one master and one read-only replica.
credentials:
  users:
    client:
      password: 'secret'
      roles: [super]
    replicator:
      password: 'secret'
      roles: [replication]
    storage:
      password: 'secret'
      roles: [sharding]

iproto:
  advertise:
    peer:
      login: replicator
    sharding:
      login: storage

sharding:
  bucket_count: 3000

groups:
  routers:
    sharding:
      roles: [router]
    roles: [roles.crud-router]
    replicasets:
      router-001:
        instances:
          router-001-a:
            iproto:
              listen:
              - uri: localhost:3301
              advertise:
                client: localhost:3301
  storages:
    sharding:
      roles: [storage]
    roles: [roles.crud-storage]
    replication:
      failover: manual
    replicasets:
      storage-001:
        leader: storage-001-a
        instances:
          storage-001-a:
            iproto:
              listen:
              - uri: localhost:3302
              advertise:
                client: localhost:3302
          storage-001-b:
            iproto:
              listen:
              - uri: localhost:3303
              advertise:
                client: localhost:3303
      storage-002:
        leader: storage-002-a
        instances:
          storage-002-a:
            iproto:
              listen:
              - uri: localhost:3304
              advertise:
                client: localhost:3304
          storage-002-b:
            iproto:
              listen:
              - uri: localhost:3305
              advertise:
                client: localhost:3305
Publish the configuration to etcd:
$ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml
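To verify what ended up in the storage, you can read the configuration back. The command below assumes that your tt version supports reading from a storage URI with tt cluster show:
$ tt cluster show "http://app_user:config_pass@localhost:2379/myapp/"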
The full cluster code is available on GitHub: migrations.
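Migrations are applied to a running cluster, so build and start the application before moving on. Assuming the myapp environment prepared above, a typical sequence is:
$ tt build myapp
$ tt start myapp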
To perform migrations in the cluster, write them in Lua and publish them to the cluster's etcd configuration storage.
Each migration file must return a Lua table with one object named apply. This object has one field, scenario, that stores the migration function:
local function apply_scenario()
-- migration code
end
return {
apply = {
scenario = apply_scenario,
},
}
The migration unit is a single file: its scenario is executed as a whole. An error that happens in any step of the scenario causes the entire migration to fail.
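For example, a scenario can abort the whole migration by raising an error when a precondition is not met. The snippet below is an illustrative sketch only and is not one of this tutorial's migration files:

local function apply_scenario()
    -- illustrative precondition: if the space already exists,
    -- raise an error so that the whole migration fails
    if box.space.writers ~= nil then
        error('space "writers" already exists')
    end
    box.schema.space.create('writers')
end

return {
    apply = {
        scenario = apply_scenario,
    },
}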
Migrations are executed in lexicographical order. Thus, it's convenient to use filenames that start with ordered numbers to define the migration order, for example:
000001_create_space.lua
000002_create_index.lua
000003_alter_space.lua
The default location where tt searches for migration files is /migrations/scenario. Create this subdirectory inside the tt environment.
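Assuming the current directory is the root of the tt environment, you can create it like this:
$ mkdir -p migrations/scenario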
Then, create two migration files in it:
000001_create_writers_space.lua: create a space, define its format, and create a primary index.

local helpers = require('tt-migrations.helpers')

local function apply_scenario()
    local space = box.schema.space.create('writers')

    space:format({
        {name = 'id', type = 'number'},
        {name = 'bucket_id', type = 'number'},
        {name = 'name', type = 'string'},
        {name = 'age', type = 'number'},
    })

    space:create_index('primary', {parts = {'id'}})
    space:create_index('bucket_id', {parts = {'bucket_id'}})

    helpers.register_sharding_key('writers', {'id'})
end

return {
    apply = {
        scenario = apply_scenario,
    },
}
Note
Note the usage of the tt-migrations.helpers module. In this example, its function register_sharding_key is used to define a sharding key for the space.
000002_create_writers_index.lua: add one more index.

local function apply_scenario()
    local space = box.space['writers']
    space:create_index('age', {parts = {'age'}})
end

return {
    apply = {
        scenario = apply_scenario,
    },
}
To publish migrations to the etcd configuration storage, run tt migrations publish:
$ tt migrations publish "http://app_user:config_pass@localhost:2379/myapp"
• 000001_create_writers_space.lua: successfully published to key "000001_create_writers_space.lua"
• 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
To apply published migrations to the cluster, run tt migrations apply, providing a cluster user's credentials:
$ tt migrations apply "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
Important
The cluster user must have enough access privileges to execute the migrations code.
The output should look as follows:
• router-001:
• 000001_create_writers_space.lua: successfully applied
• 000002_create_writers_index.lua: successfully applied
• storage-001:
• 000001_create_writers_space.lua: successfully applied
• 000002_create_writers_index.lua: successfully applied
• storage-002:
• 000001_create_writers_space.lua: successfully applied
• 000002_create_writers_index.lua: successfully applied
The migrations are applied on all replica set leaders. Read-only replicas receive the changes from the corresponding replica set leaders.
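If you want to double-check that a read-only replica has received the schema changes, connect to it and make sure the space exists; the expression below should return true (the instance name follows instances.yml above):
$ tt connect myapp:storage-001-b
myapp:storage-001-b> box.space.writers ~= nil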
Check the migrations status with tt migrations status:
$ tt migrations status "http://app_user:config_pass@localhost:2379/myapp" \
--tarantool-username=client --tarantool-password=secret
• migrations centralized storage scenarios:
• 000001_create_writers_space.lua
• 000002_create_writers_index.lua
• migrations apply status on Tarantool cluster:
• router-001:
• 000001_create_writers_space.lua: APPLIED
• 000002_create_writers_index.lua: APPLIED
• storage-001:
• 000001_create_writers_space.lua: APPLIED
• 000002_create_writers_index.lua: APPLIED
• storage-002:
• 000001_create_writers_space.lua: APPLIED
• 000002_create_writers_index.lua: APPLIED
To make sure that the space and indexes are created in the cluster, connect to the router instance and retrieve the space information:
$ tt connect myapp:router-001-a
myapp:router-001-a> require('crud').schema('writers')
---
- indexes:
0:
unique: true
parts:
- fieldno: 1
type: number
exclude_null: false
is_nullable: false
id: 0
type: TREE
name: primary
2:
unique: true
parts:
- fieldno: 4
type: number
exclude_null: false
is_nullable: false
id: 2
type: TREE
name: age
format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
...
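As a final check, you can write and read data through CRUD on the router. The sample values below are made up for illustration; the bucket_id field is calculated automatically from the sharding key registered by the first migration:
myapp:router-001-a> require('crud').insert_object('writers', {id = 1, name = 'Alexander Pushkin', age = 37})
myapp:router-001-a> require('crud').get('writers', 1)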
Learn to write and perform data migrations in Data migrations with space.upgrade().