# Conductor Setup
**Conductor Setup** is a set of Ansible roles and a playbook named `site.yaml` for deploying a Docker swarm that runs the Conductor microservice orchestrator by [Netflix OSS](https://netflix.github.io/conductor/).
## Structure of the project
The AutoDynomite Docker image script file is in the `dynomite` folder.
The Docker Compose swarm files are in the `stack` folder.
## Built With
* [Ansible](https://www.ansible.com)
* [Docker](https://www.docker.com)
## Documentation
The provided Docker stack files provide the following configuration:
- 2 Conductor Server nodes with 2 replicas handled by Swarm
- 2 Conductor UI nodes with 2 replicas handled by Swarm
- 1 Elasticsearch node
- 1 database node, which can be PostgreSQL (default), MySQL, or MariaDB
- 2 optional replicated instances of the PyExec worker running the HTTP, Eval, and Shell tasks
- 1 optional cluster-replacement service that sets up a networking environment (including an HAProxy load balancer) similar to the one available in production; it is disabled by default
The default configuration is run with the command `ansible-playbook site.yaml`.
The swarm files and configurations are generated inside a temporary folder named `/tmp/conductor_stack` on the local machine.
To change the destination folder, use the switch `-e target_path=anotherdir`.
If you only want to review the generated files, run `ansible-playbook site.yaml -e dry=true`.
To switch between PostgreSQL and MySQL, set the `db` variable accordingly: `-e db=mysql`.
To connect to an external PostgreSQL instance without bringing up a database service in the stack, use `-e db=remote-postgres`.
To skip worker creation, set the `noworker` variable: `-e noworker=true`.
To enable the cluster replacement, use the switch `-e cluster_replacement=true`.
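Instead of passing each switch on the command line, the same variables can be collected in a YAML extra-vars file and supplied with Ansible's `-e @file` syntax. The following is a minimal sketch that uses only the variables documented above; the filename `conductor-vars.yaml` is just an example:

```yaml
# conductor-vars.yaml -- example extra-vars file (hypothetical filename).
# Only variables documented in this README are used here.
target_path: /tmp/conductor       # where the generated swarm files are written
db: mysql                         # postgres (default), mysql, mariadb, or remote-postgres
noworker: true                    # skip creation of the PyExec worker services
cluster_replacement: false        # enable the HAProxy-based production-like networking
dry: false                        # set to true to only generate the files
```

It would then be used as `ansible-playbook site.yaml -e @conductor-vars.yaml`.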
If you run the stack in production behind a load-balanced setup, make sure the `cluster_check` variable is true: `ansible-playbook site.yaml -e cluster_check=true`.
To avoid juggling all of these variables, a dedicated playbook has been created for the production environment: running `ansible-playbook site-prod.yaml` deploys a stack with the production setup.
Other settings can be fine-tuned through the variables of the individual roles, which are:
- *common*: defaults and common tasks
- *conductor*: defaults, templates and tasks for generating the swarm files for the replicated Conductor server and UI
- *elasticsearch*: defaults, templates and tasks for starting a single Elasticsearch instance in the swarm
- *mysql*: defaults, templates and tasks for starting a single MySQL/MariaDB instance in the swarm
- *postgres*: defaults, templates and tasks for starting a single PostgreSQL instance in the swarm
- *workers*: defaults and tasks for starting replicated instances of the workers that execute the HTTP, Shell and Eval operations
## Examples
The following example runs, as user `username` on the remote hosts listed in `hosts`, a swarm with 2 replicas of Conductor server and UI, 1 PostgreSQL node, 1 Elasticsearch node, 2 replicas of the simple PyExec worker, and an HAProxy acting as load balancer.

`ansible-playbook -u username -i hosts site.yaml -e target_path=/tmp/conductor -e cluster_replacement=true`
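The `hosts` file referenced above is a standard Ansible inventory. A minimal sketch might look like the following; the host names and the `[swarm]` group name are placeholders, so check the playbook and roles for the group names actually expected:

```ini
# hosts -- minimal Ansible inventory sketch.
# Host names and the group name below are hypothetical examples.
[swarm]
swarm-node-1.example.org
swarm-node-2.example.org
```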
## Change log
See [CHANGELOG.md](CHANGELOG.md).
## Authors
* **Marco Lettere** ([Nubisware S.r.l.](http://www.nubisware.com))
* **Mauro Mugnaini** ([Nubisware S.r.l.](http://www.nubisware.com))
## How to Cite this Software
[Intentionally left blank]
## License
This project is licensed under the EUPL V.1.1 License - see the [LICENSE.md](LICENSE.md) file for details.
## About the gCube Framework
This software is part of the [gCubeFramework](https://www.gcube-system.org/ "gCubeFramework"): an
open-source software toolkit used for building and operating Hybrid Data
Infrastructures enabling the dynamic deployment of Virtual Research Environments
by favouring the realisation of reuse oriented policies.
The projects leading to this software have received funding from a series of European Union programmes; see [FUNDING.md](FUNDING.md).
## Acknowledgments
[Intentionally left blank]