Developer’s guide

This guide explains how to develop environment-independent applications – either cluster-aware or not – and run sample applications provided by the distribution.

An environment-independent application is an assembly (in one directory) of:

  • files with Lua code,
  • tarantool executable,
  • plugged external modules (if necessary).

When started by the tarantool executable, the application provides a service.

The modules are Lua rocks installed into a virtual environment (under the application directory) similar to Python’s virtualenv and Ruby’s bundler.

Such an application has the same structure both in development and production-ready phases. All the application-related code resides in one place, ready to be packed and copied over to any server.
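For illustration only, such an assembly might look like this on disk (the names below, except init.lua and the tarantool executable, are assumptions and may differ in your setup):

<app_name>/
    init.lua      -- Lua code: the application entry point
    tarantool     -- the tarantool executable
    .rocks/       -- plugged external modules (the virtual environment)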

To develop and run an application, in short, you need to go through the following steps:

  1. Set up a development environment from a template for the application.
  2. Develop the application. In case it is a cluster-aware application, implement it in a custom cluster role to initialize the database in a cluster environment.
  3. Plug all the necessary rock modules.
  4. Pack the application and module binaries together with the tarantool executable.
  5. Upload, install, and start corresponding instantiated services on every server dedicated for Tarantool Enterprise.
  6. In case it is a cluster-aware application, deploy the cluster.

The following sections provide details for each of these steps.

Setting up development environments from templates

Tarantool Enterprise provides you with templates to help set up your application development environment for both cluster-aware and plain (e.g., Tarantool as a proxy to third-party databases) application use cases.

To set up a development environment, in any directory say:

$ tarantoolapp create --template [plain|cluster] --name <app_name>

where you specify:

  • the plain template to develop an application for single or multiple independent Tarantool instances; or
  • the cluster template to develop a cluster-aware application.

The script will automatically set up a Git repository in a new <app_name>/ directory, tag it with version 0.1.0, and put the necessary files into it.

In this Git repository, you can develop the application, plug the necessary modules, and then easily pack everything to deploy on your server(s).
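For example, to set up a development environment for a cluster-aware application named myapp (the name is arbitrary), you might run:

$ tarantoolapp create --template cluster --name myapp
$ cd myapp
$ git describe --tags
0.1.0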

Developing applications

This section describes the templates and the default files they provide, and introduces the notion of cluster roles, which allow you to segregate functionality between instances.

To start developing an application, simply edit the default files provided by the template selected in the previous step.

During development, keep track of the application version.

Plain template

The plain template creates the <app_name>/ directory with the following contents:

  • <app_name>-scm-1.rockspec file where you can specify the application dependencies (see the sketch after this list).
  • deps.sh script that resolves dependencies from the .rockspec file.
  • init.lua file which is the entry point for your application.
  • .git directory necessary for a Git repository.
  • .gitignore file to ignore the unnecessary files.
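For reference, here is a minimal sketch of what the .rockspec file might contain (the file generated by the template may differ, and the commented-out dependency is only an illustration):

-- myapp-scm-1.rockspec (fragment; only the dependency declaration is shown)
package = 'myapp'
version = 'scm-1'
source  = { url = '/dev/null' }
dependencies = {
    'lua >= 5.1',
    -- list the rock modules your application needs here, for example:
    -- 'http == 1.0.5-1',
}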

Cluster template

In addition to the files listed in the plain template section, the cluster template contains the following:

  • env.lua file that sets common rock paths so that the application can be started from any directory.
  • custom-role.lua file that is a placeholder for a user-defined cluster role described in the next section.

The entry point file (init.lua) of the cluster template differs from the plain one. Among other things, it loads the cluster module and calls its initialization function:

...
local cluster = require('cluster')
...
cluster.cfg({
  workdir = ...,
  advertise_uri = ...,
  ...
})
...

The cluster.cfg() call renders the instance operable via the administrative console but does not call box.cfg() to configure instances.

Warning

Calling the box.cfg() function is forbidden.

The cluster itself will do it for you when it is time to:

  • bootstrap the current instance once you:
    • run cluster.bootstrap() via the administrative console, or
    • click Create in the web interface;
  • join the instance to an existing cluster once you:
    • run cluster.join_server({uri = 'other_instance_uri'}) via the console, or
    • click Join (an existing replica set) or Create (a new replica set) in the web interface.

Before developing a cluster-aware application, familiarize yourself with the notion of cluster roles described in the next section and make sure to define a custom role to initialize the database for the cluster application.

Defining custom cluster roles

Tarantool Enterprise cluster segregates instance functionality in a role-based way. Cluster roles are Lua modules that implement some instance-specific functions and/or logic.

Since all instances running cluster applications use the same source code and are aware of all the defined roles (and plugged modules), multiple different roles can be dynamically enabled and disabled on any number of instances without restarts even during cluster operation.

Built-in roles

The cluster module comes with two built-in roles that implement automatic sharding:

  • vshard-router that handles the vshard’s compute-intensive workload: routes requests to storage nodes.

  • vshard-storage that handles the vshard’s transaction-intensive workload: stores and manages a subset of a dataset.

    Note

    For more information on sharding, see the vshard module documentation.

With the built-in and custom roles, Tarantool Enterprise allows you to develop applications with separated compute and transaction handling. Later, the relevant workload-specific roles can be enabled on different instances running on physical servers with workload-dedicated hardware.

Neither vshard-router nor vshard-storage manage spaces, indexes, or formats. To start developing an application, edit the custom-role.lua placeholder file: add a box.schema.space.create() call to your first cluster role.

Additionally, you can implement several such roles to:

  • define stored procedures;
  • implement functionality on top of vshard;
  • go without vshard at all;
  • implement one or multiple supplementary services such as e-mail notifier, replicator, etc.

Implementing and registering custom roles

To implement a custom cluster role, do the following:

  1. Register the new role in the cluster by modifying the cluster.cfg() call in the init.lua entry point file:

    ...
    local cluster = require('cluster')
    ...
    cluster.cfg({
      workdir = ...,
      advertise_uri = ...,
      roles = {'custom-role'},
    })
    ...
    

    where custom-role is the name of the Lua module to be loaded.

  2. Implement the role in a file with the appropriate name (custom-role.lua). For example:

    #!/usr/bin/env tarantool
    -- Custom role implementation
    local role_name = 'custom-role'
    
    local function init()
    ...
    end
    
    local function stop()
    ...
    end
    
    return {
        role_name = role_name,
        init = init,
        stop = stop,
    }
    

    where role_name may differ from the module name passed to the cluster.cfg() function. If the role_name variable is not specified, the module name is used by default.

    Note

    Role names must be unique as it is impossible to register multiple roles with the same name.

The role module does not have any required functions, but the cluster may execute the following ones during the role’s life cycle:

  • init() is the role’s initialization function.

    Inside the function’s body you can call any box functions: create spaces, indexes, grant permissions, etc. Here is what the initialization function may look like:

    local function init(opts)
        -- The cluster passes an 'opts' Lua table containing an 'is_master' flag.
        if opts.is_master then
            local customer = box.schema.space.create('customer',
                { if_not_exists = true }
            )
            customer:format({
                {'customer_id', 'unsigned'},
                {'bucket_id', 'unsigned'},
                {'name', 'string'},
            })
            customer:create_index('customer_id', {
                parts = {'customer_id'},
                if_not_exists = true,
            })
        end
    end
    

    Note

    The function’s body is wrapped in a conditional statement that lets you call box functions on masters only. This protects against replication collisions as data propagates to replicas automatically.

  • stop() is the role’s termination function. Implement it if initialization starts a fiber that has to be stopped or does any other job that has to be undone on termination (see the sketch after this list).

  • validate_config() and apply_config() are validation and application functions that make custom roles configurable. Implement them if some configuration data has to be stored cluster-wide.
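For instance, a role whose init() starts a background fiber can cancel it in stop(). A minimal sketch, assuming the fiber’s periodic job is application-specific and purely hypothetical:

#!/usr/bin/env tarantool
-- custom-role.lua: a role that starts and stops a background fiber
local fiber = require('fiber')

local role_name = 'custom-role'
local background_fiber = nil

local function init(opts)
    -- Start a fiber that does some periodic job (hypothetical)
    background_fiber = fiber.create(function()
        while true do
            -- ... do the periodic job here ...
            fiber.sleep(1)
        end
    end)
end

local function stop()
    -- Undo what init() did: cancel the fiber started above
    if background_fiber ~= nil then
        background_fiber:cancel()
        background_fiber = nil
    end
end

return {
    role_name = role_name,
    init = init,
    stop = stop,
}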

Next, get a grip on the role’s life cycle to implement the necessary functions.

Role’s life cycle and the order of function execution

The cluster displays all custom role names along with the built-in vshard ones in the web interface. Cluster administrators can enable and disable them for particular instances either via the web interface or cluster public API. For example:

cluster.admin.edit_replicaset('replicaset-uuid', {roles = {'vshard-router', 'custom-role'}})

If multiple roles are enabled on an instance at the same time, the cluster first initializes the built-in roles (if any) and then the custom ones (if any) in the order the latter were listed in cluster.cfg().

The cluster calls the role’s functions in the following circumstances:

  • The init() function, typically, once: either when the role is enabled by the administrator or on instance restart. Enabling a role once is normally enough.
  • The stop() function – only when the administrator disables the role, not on instance termination.
  • The validate_config() function, first, before the automatic box.cfg() call (database initialization), then – upon every configuration update.
  • The apply_config() function upon every configuration update.

Hence, if the cluster is tasked with performing the following actions, it will execute the functions listed in the following order:

  • Join an instance or create a replica set, both with an enabled role:
    1. validate_config()
    2. init()
    3. apply_config()
  • Restart an instance with an enabled role:
    1. validate_config()
    2. init()
    3. apply_config()
  • Disable role: stop().
  • Upon the cluster.confapplier.patch_clusterwide() call:
    1. validate_config()
    2. apply_config()
  • Upon a triggered failover:
    1. validate_config()
    2. apply_config()

Considering the described behavior:

  • The init() function may:
    • Call box functions.
    • Start a fiber and, in this case, the stop() function should take care of the fiber’s termination.
    • Configure the built-in HTTP server.
    • Execute any code related to the role’s initialization.
  • The stop() function must undo any job that has to be undone on the role’s termination.
  • The validate_config() function must validate any configuration change.
  • The apply_config() function may execute any code related to a configuration change, e.g., take care of an expirationd fiber.

The validation and application functions together allow you to customize the cluster-wide configuration as described in the next section.

Configuring custom roles

Every instance in the cluster stores a copy of the configuration file in its working directory (configured by cluster.cfg({workdir = ...})):

  • /var/lib/tarantool/<instance_name>/config.yml for instances deployed from RPM packages and managed by systemd.
  • /home/<username>/tarantool_state/var/lib/tarantool/config.yml for instances deployed from archives and managed by tarantoolctl.

The cluster’s configuration is a Lua table. If some application-specific configuration data, e.g., a database schema as defined by DDL (data definition language), has to be stored on every instance in the cluster, you can implement your own API by adding a custom section to the table. The cluster will help you spread it safely across all instances.

Such a section goes in parallel (in the same file) with the topology-specific and vshard-specific sections that the cluster generates automatically. Unlike the generated sections, the modification, validation, and application logic of the custom section has to be defined by you.
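As an illustration only, the custom section keyed by the role name might sit next to the generated sections like this (the generated section names and contents are omitted, and the secret key anticipates the example further below):

# config.yml (simplified, illustrative view)
# ... topology-specific section generated by the cluster ...
# ... vshard-specific section generated by the cluster ...
custom-role:
  secret: some-value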

The common way is to:

  1. Implement some setters (and getters, if necessary) using the cluster.confapplier public API functions: get_readonly(section), get_deepcopy(section), and patch_clusterwide({section = section_cfg}).

  2. Define, first, the validate_config(conf_new, conf_old), then the apply_config(conf, opts) functions.

    These functions both take two Lua tables as arguments: the ones whose names start with conf are configuration tables, and opts includes a boolean is_master flag described later.

    Important

    The validate_config() function must detect all configuration problems that may lead to apply_config() errors. For more information, see the next section.

When implementing validation and application functions that call box ones for some reason, the following precautions apply:

  • Due to the role’s life cycle, the cluster does not guarantee an automatic box.cfg() call prior to calling validate_config().

    If the validation function is to call any box functions (e.g., to check a format), make sure the calls are wrapped in a protective conditional statement that checks if box.cfg() has already happened:

    -- Inside the validation function:
    
    if type(box.cfg) == 'function' then
    
        -- Here you can call box functions
    
    end
    
  • Unlike the validation function and similarly to the initialization function, apply_config() can call box functions freely because the cluster applies the custom configuration after the automatic box.cfg() call.

    However, creating spaces, users, etc., can cause replication collisions when performed on both master and replica instances simultaneously. The appropriate way is to call such box functions on masters only and let the changes propagate to replicas automatically.

    Upon the apply_config(conf, opts) execution, the cluster passes an is_master flag in the opts table which you can use to wrap collision-inducing box functions in a protective conditional statement:

    -- Inside the configuration application function:
    
    if opts.is_master then
    
        -- Here you can call box functions
    
    end
    

Custom configuration example

Consider the following code as part of the role’s module (custom-role.lua) implementation:

#!/usr/bin/env tarantool
-- Custom role implementation

local cluster = require('cluster')

local role_name = 'custom-role'

-- Modify the config by implementing some setter
local function set_secret(secret)
    local custom_role_cfg = cluster.confapplier.get_deepcopy(role_name) or {}
    custom_role_cfg.secret = secret
    cluster.confapplier.patch_clusterwide({
        [role_name] = custom_role_cfg,
    })
end
-- Validate
local function validate_config(cfg)
    local custom_role_cfg = cfg[role_name] or {}
    if custom_role_cfg.secret ~= nil then
        assert(type(custom_role_cfg.secret) == 'string', 'custom-role.secret must be a string')
    end
    return true
end
-- Apply
local function apply_config(cfg)
    local custom_role_cfg = cfg[role_name] or {}
    local secret = custom_role_cfg.secret or 'default-secret'
    -- Make use of it
end

return {
    role_name = role_name,
    set_secret = set_secret,
    validate_config = validate_config,
    apply_config = apply_config,
}
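If the role also needs to read the stored value back, a getter can use the read-only view of the configuration mentioned earlier. A minimal sketch that would live in the same custom-role.lua module (get_secret is not part of the original example); remember to add it to the returned table if it has to be callable from outside the module:

-- Read the custom section without copying it
local function get_secret()
    local custom_role_cfg = cluster.confapplier.get_readonly(role_name) or {}
    return custom_role_cfg.secret
end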

Once the configuration is customized, apply it in one of the ways described in the next section.

Applying custom role’s configuration

With the implementation shown in the example, you can call the set_secret() function to apply the new configuration via the administrative console or an HTTP endpoint if the role exports one.
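For example, from the administrative console of any instance you might run the following (an illustrative call; the secret value is arbitrary, and the role module is assumed to be loadable by its name):

require('custom-role').set_secret('new-secret-value')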

The set_secret() function calls cluster.confapplier.patch_clusterwide() which performs a two-phase commit:

  1. It patches the active configuration in memory: copies the table and replaces the "custom-role" section in the copy with the one given by the set_secret() function.
  2. The cluster checks if the new configuration can be applied on all instances except the disabled and expelled ones. All instances subject to the update must be healthy and alive according to the membership module.
  3. (Preparation phase) The cluster propagates the patched configuration. Every instance validates it with the validate_config() function of every registered role. Depending on the validation’s result:
    • If successful (i.e., returns true), the instance saves the new configuration to a temporary file named config.prepare.yml within the working directory.
    • (Abort phase) Otherwise, the instance reports an error and all other instances roll back the update: remove the file they may have already prepared.
  4. (Commit phase) Upon successful preparation of all instances, the cluster commits the changes. Every instance:
    1. Creates a hard link to the active configuration.
    2. Atomically replaces the active configuration with the prepared one. The replacement is indivisible: it either succeeds or fails entirely, never partially.
    3. Calls the apply_config() function of every registered role.

If any of these steps fails, an error pops up in the web interface next to the corresponding instance. The cluster does not handle such errors automatically; they require manual repair.

You can avoid the need for manual repair if the validate_config() function detects all configuration problems that may lead to apply_config() errors.

Using the built-in HTTP server

The cluster launches an httpd server instance during initialization (cluster.cfg()). You can bind a port to the instance via an environment variable:

-- Get the port from an environment variable or use the default one:
local http_port = os.getenv('HTTP_PORT') or '8080'

local ok, err = cluster.cfg({
   ...
   -- Pass the port to the cluster:
   http_port = http_port,
   ...
})

To make use of the httpd instance, access it and configure routes inside the init() function of some role, e.g., a role that exposes an API over HTTP:

local function init(opts)

...

   -- Get the httpd instance:
   local httpd = cluster.service_get('httpd')
   if httpd ~= nil then
       -- Configure a route to, for example, metrics:
       httpd:route({
               method = 'GET',
               path = '/metrics',
               public = true,
           },
           function(req)
               return req:render({json = stat.stat()})
           end
       )
   end
end
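Once a role configuring such a route is enabled on an instance, the endpoint can be queried over HTTP, for example (assuming a local instance and the default port from the snippet above):

$ curl http://localhost:8080/metrics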

For more information on the usage of Tarantool’s HTTP server, see its documentation.

Implementing authorization in the web interface

To implement authorization in the web interface of every instance in the Tarantool cluster:

  1. Implement a new, say, auth module with a check_password function. It should check the credentials of any user trying to log in to the web interface.

    The check_password function accepts a username and password and returns an authentication success or failure.

    -- auth.lua
    
    -- Add a function to check the credentials
    local function check_password(username, password)
    
        -- Check the credentials any way you like
    
        -- Return an authentication success or failure
        if not ok then
            return false
        end
        return true
    end
    ...
    

    If you run an LDAP server in your organization, you can connect Tarantool Enterprise to it and let it handle the authorization. In this case, add the ldap module to the .rockspec file as a dependency and consider implementing the check_password function as follows:

    -- auth.lua
    
    -- Require the LDAP module at the start of the file
    local ldap = require('ldap')
    ...
    -- Add a function to check the credentials
    local function check_password(username, password)
    
        -- Configure the necessary LDAP parameters
        local user = string.format("cn=%s,ou=superheros,dc=glauth,dc=com", username)
    
        -- Connect to the LDAP server
        local ld, err = ldap.open("localhost:3893", user, password)
    
        -- Return an authentication success or failure
        if not ld then
            return false
        end
        return true
    end
    ...
    
  2. Pass the implemented auth module name as a parameter to cluster.cfg(), so the cluster can use it:

    -- init.lua
    
    local ok, err = cluster.cfg({
        auth_backend_name = 'auth',
        -- The cluster will automatically call 'require()' on the 'auth' module.
        ...
    })
    

    This adds a Log in button to the upper right corner of the web interface but still lets users who are not signed in interact with the interface. This is convenient for testing.

    Note

    Also, to authorize requests to the cluster API, you can use the HTTP basic authorization header.

  3. To require the authorization of every user in the web interface even before the cluster bootstrap, add the following line:

    -- init.lua
    
    local ok, err = cluster.cfg({
        auth_backend_name = 'auth',
        auth_enabled = true,
        ...
    })
    

    With authentication enabled and the auth module implemented, the user will not even be able to bootstrap the cluster without logging in. After a successful login and bootstrap, authentication can be enabled and disabled cluster-wide in the web interface, and the auth_enabled parameter is ignored.

Application versioning

Tarantool Enterprise understands semantic versioning as described at semver.org. When developing an application, create new Git branches and tag them appropriately. These tags are used to calculate version increments for subsequent packaging.

For example, if your application has version 1.2.1, tag your current branch with 1.2.1 (annotated or not).
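For example, either form of tagging works:

$ git tag -a 1.2.1 -m "version 1.2.1"
$ git tag 1.2.1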

To retrieve the current version from Git, say:

$ git describe --long --tags
1.2.1-12-g74864f2

This output shows that we are 12 commits after the version 1.2.1. If we are to package the application at this point, it will have a full version of 1.2.1-12 and its package will be named <app_name>-1.2.1-12.rpm.

Non-semantic tags are prohibited. You will not be able to create a package from a branch with the latest tag being non-semantic.

Once the application is developed, pack it as described below.

Packaging applications

Once custom cluster role(s) are defined and the application is developed, pack it and all its dependencies (module binaries) together with the tarantool executable.

This will allow you to upload, install, and run your application on any server in one go.

To pack the application, say:

$ tarantoolapp pack [rpm|tgz] /path/to/<app_name>

where you specify one of the following options:

  • (Recommended) rpm to create an RPM package.
  • tgz to create a tar + gz archive. Choose this option only if you do not have root privileges on servers dedicated for Tarantool Enterprise.

Also, provide the path to your development environment – the Git repository containing the application code.

This will create a package (or compressed archive) named <app_name>-<version_tag>-<number_of_commits> (e.g., myapp-1.2.1-12.rpm) containing your environment-independent application.

Proceed to deploying packaged applications (or archived ones) on your servers.

Deploying packaged applications

To deploy your packaged application, do the following on every server dedicated for Tarantool Enterprise:

  1. Upload the package created in the previous step.

  2. Install:

    $ yum install <app_name>-<version>.rpm
    
  3. Start one or multiple Tarantool instances with the corresponding services as described below.

    • A single instance:

      $ systemctl start <app_name>
      

      This will start an instantiated systemd service that will listen on port 3301.

    • Multiple instances on one or multiple servers:

      $ systemctl start <app_name>@instance_1
      $ systemctl start <app_name>@instance_2
      ...
      $ systemctl start <app_name>@instance_<number>
      

      where <app_name>@instance_<number> is the name of the instantiated systemd service, and <number> is an incremental number (unique for every instance) added to the base port 3300 to determine the port the instance will listen on (e.g., 3301, 3302, etc.).

  4. In case it is a cluster-aware application, proceed to deploying the cluster.

To stop all services on a server, use the systemctl stop command and specify instance names one by one. For example:

$ systemctl stop <app_name>@instance_1 <app_name>@instance_2 ... <app_name>@instance_<N>

Deploying archived applications

While the RPM package places your application in /usr/share/tarantool/<app_name> on your server by default, the tar + gz archive does not enforce any structure apart from just the <app_name>/ directory, so you are responsible for placing it appropriately.

Note

RPM packages are recommended for deployment. Deploy archives only if you do not have root privileges.

To place and deploy the application, do the following on every server dedicated for Tarantool Enterprise:

  1. Take the tarantoolctl binary from the SDK and upload it anywhere on the server. If you do not have root privileges, a good place is $HOME/bin. Add this path to your ~/.bash_profile:

    $ export PATH="$HOME/bin:$PATH"
    
  2. Configure tarantoolctl to look for Tarantool instances in a certain place. For example, add the following lines to ~/.config/tarantool/tarantool:

    username = "<user>"
    instance_dir = "/home/<user>/apps"
    vinyl_dir = "/home/<user>/tarantool_state/var/lib/tarantool"
    memtx_dir = "/home/<user>/tarantool_state/var/lib/tarantool"
    snap_dir = "/home/<user>/tarantool_state/var/lib/tarantool"
    wal_dir = "/home/<user>/tarantool_state/var/lib/tarantool"
    log = "/home/<user>/tarantool_state/var/log/tarantool"
    pid_file = "/home/<user>/tarantool_state/var/run/tarantool"
    

    where <user> is the username of an account without root privileges, the instance_dir is the directory to unpack applications to, and the rest are state directories.

  3. Upload the archive, decompress, and extract it to the /home/<user>/apps directory:

    $ tar -xzvf <app_name>-<version>.tar.gz -C /home/<user>/apps
    
  4. Start one or multiple Tarantool instances with the corresponding services as described below.

    • A single instance:

      $ tarantoolctl start <app_name>
      
    • Multiple instances on one or multiple servers:

      $ tarantoolctl start <app_name>@instance_1
      $ tarantoolctl start <app_name>@instance_2
      ...
      $ tarantoolctl start <app_name>@instance_<number>
      

      where <number> is an incremental number (unique for every instance) added to the base port 3300 to determine the port the instance will listen on (e.g., 3301, 3302, etc.).

      This starts several instances from the same directory /home/<user>/apps/<app_name> but their state files differ depending on the suffix after @. For example, the log file of the first instance will have the following path:

      /home/<user>/tarantool_state/var/log/tarantool/<app_name>.instance_1.log
      
  5. In case it is a cluster-aware application, proceed to deploying the cluster.

To stop all instances on a server, run the following command for every instance:

$ tarantoolctl stop <app_name>[@instance_<number>]

Upgrading code

All instances in the cluster must run the same code. This includes all the components: custom roles, applications, module binaries, and the tarantool and tarantoolctl (if necessary) executables.

Pay attention to possible backward incompatibility that any component may introduce. This will help you choose a scenario for an upgrade in production. Keep in mind that you are responsible for code compatibility and handling conflicts should inconsistencies occur.

To upgrade any of the components, prepare a new version of the package (archive):

  1. Update the necessary files in your development environment (directory):
    • Your own source code: custom roles and/or applications.
    • Module binaries.
    • Executables. Replace them with ones from the new bundle.
  2. Increment the version as described in application versioning.
  3. Repack the updated files as described in packaging applications.
  4. Choose an upgrade scenario as described in the production upgrade section.

Running sample applications

The distribution package includes sample applications in the example/ directory that showcase basic Tarantool functionality.

Cluster application

The example in the cluster/ directory showcases a simple cluster-aware application. It consists of the following files:

  • init.lua – module containing the cluster.cfg() initialization function.
  • app.lua – role module to contain your stored procedures and API calls.
  • storage.lua – role module defining functions for the database.
  • deps.sh – script to resolve rocks dependencies from an offline repository included in the archive.
  • start.sh – script to start several Tarantool instances.
  • assemble.sh – script to assemble the cluster.
  • test.py – health check test showing how to put and get data from the running application.
  • clean.sh – script to clean the data.

Look through the code in the files to get an understanding of what the application does.

To start the sample application, do the following:

  1. Start several instances:

    $ ./start.sh
    
  2. Assemble the cluster by assigning roles to instances in the web interface or simply run:

    $ ./assemble.sh
    

To check if all the instances are up and running, say:

$ ps x|grep tarantool

To run a basic sanity check on the application, use the provided test script:

$ python test.py

To clean the data after, say:

$ ./clean.sh

To stop the application, say:

$ ./stop.sh

Write-through cache application for PostgreSQL

The example in pg_writethrough_cache/ shows how Tarantool can cache data written through it to a PostgreSQL database to speed up the reads.

The sample application requires a deployed PostgreSQL database and the following rock modules:

$ tarantoolctl rocks install http
$ tarantoolctl rocks install pg
$ tarantoolctl rocks install argparse

Look through the code in the files to get an understanding of what the application does.

To run the application for a local PostgreSQL database, say:

$ tarantool cachesrv.lua --binary-port 3333 --http-port 8888 --database postgresql://localhost/postgres

Write-behind cache application for Oracle

The example in ora-writebehind-cache/ shows how Tarantool can cache writes and queue them to an Oracle database to speed up both writes and reads.

Application requirements

The sample application requires:

  • deployed Oracle database;

  • Oracle tools: Instant Client and SQL Plus, both version 12.2;

    Note

    In case the Oracle Instant Client errors out on .so files (Oracle’s dynamic libraries), put them in some directory and add it to the LD_LIBRARY_PATH environment variable.

    For example: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PWD/<path_to_so_files>

  • rock modules listed in the rockspec file.

To install the modules, run the following command in the examples/ora_writebehind_cache directory:

$ tarantoolctl rocks make oracle_rb_cache-0.1.0-1.rockspec

If you do not have a deployed Oracle instance at hand, run a dummy in a Docker container:

  1. In a browser, log in to the Oracle container registry, click Database, and accept Oracle’s Enterprise Terms and Restrictions.

  2. In the ora-writebehind-cache/ directory, log in to the repository under the Oracle account, pull, and run an image using the prepared scripts:

    $ docker login container-registry.oracle.com
    Login:
    Password:
    Login Succeeded
    $ docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
    $ docker run -itd \
       -p 1521:1521 \
       -p 5500:5500 \
       --name oracle \
       -v "$(pwd)"/setupdb/configDB.sh:/home/oracle/setup/configDB.sh \
       -v "$(pwd)"/setupdb/runUserScripts.sh:/home/oracle/setup/runUserScripts.sh \
       -v "$(pwd)"/startupdb:/opt/oracle/scripts/startup \
       container-registry.oracle.com/database/enterprise:12.2.0.1
    

Once the setup is complete, run the example application.

Running write-behind cache

To launch the application, run the following in the examples/ora_writebehind_cache directory:

$ tarantool init.lua

The application supports the following requests:

  • Get: GET http://<host>:8080/account/id;

  • Add: POST http://<host>:8080/account/ with the following data:

    {"clng_clng_id":1,"asut_asut_id":2,"creation_data":"01-JAN-19","navi_user":"userName"}
    
  • Update: POST http://<host>:8080/account/id with the same data as in the add request;

  • Remove: DELETE http://<host>:8080/account/id where id is an account identifier.
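For example, such requests can be sent with curl (the host, port, and account id below are placeholders, and these calls are illustrative rather than taken from the provided test scripts):

$ curl http://localhost:8080/account/1
$ curl -X POST http://localhost:8080/account/ \
      -d '{"clng_clng_id":1,"asut_asut_id":2,"creation_data":"01-JAN-19","navi_user":"userName"}'
$ curl -X DELETE http://localhost:8080/account/1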

Look for sample cURL scripts in the examples/ora_writebehind_cache/testing directory and check the README.md for more information on the implementation.

Hello-world application in Docker

The example in the docker/ directory contains a hello-world application that you can pack in a Docker container and run on CentOS 7.

The hello.lua file is the entry point and it is very bare-bones, so you can add your own code here.

  1. To build the container, say:

    $ docker build -t tarantool-enterprise-docker -f Dockerfile ../..
    
  2. To run it:

    $ docker run --rm -t -i tarantool-enterprise-docker