Quick start with sharding
For installation instructions, check out the vshard installation manual.
For a pre-configured development cluster, check out the example/ directory in the vshard repository.
This example includes 5 Tarantool instances and 2 replica sets:
router_1 – a router instance
storage_1_a – a storage instance, the master of the first replica set
storage_1_b – a storage instance, the replica of the first replica set
storage_2_a – a storage instance, the master of the second replica set
storage_2_b – a storage instance, the replica of the second replica set
All instances are managed using the tt administrative utility.
Change the directory to example/ and use make to run the development cluster:
$ cd example/
$ make
Essential make commands you need to know:
make start – start all Tarantool instances
make stop – stop all Tarantool instances
make logcat – show logs from all instances
make enter – enter the admin console on router_1
make clean – clean up all persistent data
make test – run the test suite (you can also run test-run.py in the test directory)
make – execute make stop, make clean, make start and make enter
For example, to start all instances, use make start:
$ make start
$ ps x|grep tarantool
46564 ?? Ss 0:00.34 tarantool storage_1_a.lua <running>
46566 ?? Ss 0:00.19 tarantool storage_1_b.lua <running>
46568 ?? Ss 0:00.35 tarantool storage_2_a.lua <running>
46570 ?? Ss 0:00.20 tarantool storage_2_b.lua <running>
46572 ?? Ss 0:00.25 tarantool router_1.lua <running>
To run commands in the admin console, use the router’s public API:
unix/:./data/router_1.control> vshard.router.info()
---
- replicasets:
ac522f65-aa94-4134-9f64-51ee384f1a54:
replica: &0
network_timeout: 0.5
status: available
uri: storage@127.0.0.1:3303
uuid: 1e02ae8a-afc0-4e91-ba34-843a356b8ed7
uuid: ac522f65-aa94-4134-9f64-51ee384f1a54
master: *0
cbf06940-0790-498b-948d-042b62cf3d29:
replica: &1
network_timeout: 0.5
status: available
uri: storage@127.0.0.1:3301
uuid: 8a274925-a26d-47fc-9e1b-af88ce939412
uuid: cbf06940-0790-498b-948d-042b62cf3d29
master: *1
bucket:
unreachable: 0
available_ro: 0
unknown: 0
available_rw: 3000
status: 0
alerts: []
...
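Once the router reports available replica sets, requests can be routed through its public API. The following is a minimal sketch run in the router's console; the key 'customer_1' and the stored function do_insert are illustrative assumptions, not part of the example cluster:

```lua
-- Derive a bucket id from an application key using vshard's
-- default CRC32-based calculation.
local bucket_id = vshard.router.bucket_id_strcrc32('customer_1')

-- Route a write to the master of the replica set that owns this bucket.
-- 'do_insert' is a hypothetical stored function defined on the storages.
local result, err = vshard.router.callrw(bucket_id, 'do_insert',
                                         {bucket_id, 'customer_1'},
                                         {timeout = 5})
```

Read-only requests can be routed the same way with vshard.router.callro(), which may be served by a replica.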
The configuration of a simple sharded cluster can look like this:
local cfg = {
    memtx_memory = 100 * 1024 * 1024,
    bucket_count = 10000,
    rebalancer_disbalance_threshold = 10,
    rebalancer_max_receiving = 100,
    sharding = {
        ['cbf06940-0790-498b-948d-042b62cf3d29'] = {
            replicas = {
                ['8a274925-a26d-47fc-9e1b-af88ce939412'] = {
                    uri = 'storage:storage@127.0.0.1:3301',
                    name = 'storage_1_a',
                    master = true
                },
                ['3de2e3e1-9ebe-4d0d-abb1-26d301b84633'] = {
                    uri = 'storage:storage@127.0.0.1:3302',
                    name = 'storage_1_b'
                }
            },
        },
        ['ac522f65-aa94-4134-9f64-51ee384f1a54'] = {
            replicas = {
                ['1e02ae8a-afc0-4e91-ba34-843a356b8ed7'] = {
                    uri = 'storage:storage@127.0.0.1:3303',
                    name = 'storage_2_a',
                    master = true
                },
                ['001688c3-66f8-4a31-8e19-036c17d489c2'] = {
                    uri = 'storage:storage@127.0.0.1:3304',
                    name = 'storage_2_b'
                }
            },
        },
    },
}
This cluster includes one router instance and two storage replica sets. Each replica set includes one master and one replica storage instance.
The sharding field defines the logical topology of a sharded Tarantool cluster. All the other fields are passed to box.cfg() as they are, without modifications. See the Configuration reference section for details.
On routers, call vshard.router.cfg(cfg):
-- The port on which the router accepts client connections
cfg.listen = 3300
-- Start the database with sharding
vshard = require('vshard')
vshard.router.cfg(cfg)
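On the very first start, buckets are not yet assigned to any replica set. A common follow-up step (not shown in the example scripts above) is to bootstrap the cluster once from the router, which distributes all bucket_count buckets across the storages:

```lua
-- One-time cluster bootstrap: assigns all buckets to replica sets.
-- Safe to call only on an empty, freshly configured cluster.
vshard.router.bootstrap()
```

After bootstrap, vshard.router.info() should report the full bucket count as available_rw.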
On storages, call vshard.storage.cfg(cfg, instance_uuid):
-- The UUID of this storage instance
local MY_UUID = "de0ea826-e71d-4a82-bbf3-b04a6413e417"
-- Call a configuration provider
local cfg = require('localcfg')
-- Start the database with sharding
vshard = require('vshard')
vshard.storage.cfg(cfg, MY_UUID)
vshard.storage.cfg() automatically calls box.cfg() and configures the listen port and replication parameters.
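Application spaces on the storages must carry a bucket id field with a non-unique index on it, so that vshard can track and rebalance buckets between replica sets. A minimal sketch of such a schema, assuming an illustrative space named customer (the space and field names are assumptions, not part of the example cluster):

```lua
box.once('app_schema', function()
    local customer = box.schema.space.create('customer')
    customer:format({
        {name = 'id',        type = 'unsigned'},
        {name = 'bucket_id', type = 'unsigned'},
        {name = 'name',      type = 'string'},
    })
    customer:create_index('id', {parts = {'id'}})
    -- A non-unique index on bucket_id is required by vshard
    -- for bucket lookup and rebalancing.
    customer:create_index('bucket_id', {parts = {'bucket_id'},
                                        unique = false})
end)
```

The bucket_id value for each tuple is supplied by the application, typically computed on the router from the sharding key.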
For a sample configuration, see router.lua and storage.lua in the example/ directory of the vshard repository.