
Conductor Setup

Conductor Setup consists of a Docker image build script, used to build the autodynomite image, and three Docker Compose Swarm YAML files used to deploy Conductor in high availability (HA).

Structure of the project

The AutoDynomite Docker image script file is in the dynomite folder. The Docker Compose Swarm files are in the stack folder.

Built With


The provided Docker stack files provide the following configuration:

  • 4 Dynomite nodes (2 shards with 1 replica each, handled directly by Dynomite), based on the autodynomite image and backed by a Redis DB in the same container
  • 2 Conductor Server nodes with 2 replicas handled by Swarm
  • 2 Conductor UI nodes with 2 replicas handled by Swarm
  • 1 Elasticsearch node
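
The Swarm-handled replica counts listed above map onto the deploy section of a stack file. The fragment below is only an illustrative sketch, not the actual contents of conductor-swarm.yaml; the service name, image tag, and port are assumptions:

```yaml
# Hypothetical Swarm service entry; the real conductor-swarm.yaml
# may differ in service names, images, and ports.
version: "3.3"
services:
  conductor-server:
    image: conductor:server     # assumed image name
    deploy:
      replicas: 2               # Swarm keeps 2 replicas of this service running
    ports:
      - "8080:8080"             # assumed Conductor server port
```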

Build the autodynomite Docker image with the Dockerfile in the dynomite folder, then launch the three Docker Compose Swarm files in sequence:

  • dynomite-swarm.yaml
  • elasticsearch-swarm.yaml
  • conductor-swarm.yaml

The command to be executed should look like: docker stack deploy -c dynomite-swarm.yaml -c elasticsearch-swarm.yaml -c conductor-swarm.yaml [your stack name]
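
The build-and-deploy sequence above can be scripted as follows. This is a minimal sketch: the image tag autodynomite and the stack name conductor are placeholder choices, and the script assumes it is run from the repository root on a Swarm manager node.

```shell
#!/bin/sh
# Sketch of the build-and-deploy sequence described above; the image
# tag "autodynomite" and the stack name are placeholders.
STACK_NAME=conductor

if command -v docker >/dev/null 2>&1; then
  # Build the autodynomite image from the dynomite folder.
  docker build -t autodynomite ./dynomite
  # Deploy all three stack files into a single Swarm stack.
  docker stack deploy \
    -c dynomite-swarm.yaml \
    -c elasticsearch-swarm.yaml \
    -c conductor-swarm.yaml \
    "$STACK_NAME"
else
  echo "docker not available; commands shown for reference only"
fi
```

Passing multiple -c flags merges the three files into one stack, so all services land in the same Swarm stack namespace.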

If you plan to deploy more than 4 nodes for Dynomite persistence, you should modify the dynomite-swarm.yaml and seeds.list files as per your needs. The other files should be left unmodified.
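
For reference, each line of a Dynomite seeds.list follows the pattern host:port:rack:datacenter:token. The fragment below is illustrative only; the host names, peer port, rack/datacenter labels, and tokens are assumptions and must match your own dynomite-swarm.yaml topology (here, 2 shards split by token, replicated across 2 racks):

```
dynomite1:8101:rack1:dc1:0
dynomite2:8101:rack1:dc1:2147483647
dynomite3:8101:rack2:dc1:0
dynomite4:8101:rack2:dc1:2147483647
```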

Change log



How to Cite this Software

[Intentionally left blank]


This project is licensed under the EUPL V.1.1 License - see the file for details.

About the gCube Framework

This software is part of the gCube Framework: an open-source software toolkit for building and operating Hybrid Data Infrastructures that enable the dynamic deployment of Virtual Research Environments and favour reuse-oriented policies.

The projects leading to this software have received funding from a series of European Union programmes see


[Intentionally left blank]