diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3a3f3c0..85ecdd8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,12 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
# Changelog for "conductor-setup"
+## [v0.2.0]
+- Factored out workflows
+- Added relational persistence
+- Removed Dynomite
+- Added workers
+
## [v0.1.0-SNAPSHOT]
- First release. It provides Conductor HA with 2 instances. (#19689).
diff --git a/README.md b/README.md
index 3101804..25c1460 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# Conductor Setup
-**Conductor Setup** is composed by a Docker image script that should be used to build the `autodynomite` image, and 3 different Docker Compose Swarm YAML files to deploy the Conductor in HA.
+**Conductor Setup** is composed of a set of ansible roles and a playbook named site.yaml useful for deploying a docker swarm running Conductor microservice orchestrator by [Netflix OSS](https://netflix.github.io/conductor/).
## Structure of the project
@@ -9,27 +9,37 @@ The Docker Compose Swarm files are present in the `stack` folder.
## Built With
+* [Ansible](https://www.ansible.com)
* [Docker](https://www.docker.com)
## Documentation
The provided Docker stack files provide the following configuration:
-- 4 Dynomites nodes (2 shards with 1 replication each one, handled by Dynomite directly) based on `autodynomite` image that is backed by Redis DB in the same container
- 2 Conductor Server nodes with 2 replicas handled by Swarm
- 2 Conductor UI nodes with 2 replicas handled by Swarm
- 1 Elasticsearch node
-
-Build the Docker `autodynomite` image with the `Dockerfile` present in the dynomite folder and launch the three Docker Compose Swarm files is sequence:
-
-- dynomite-swarm.yaml
-- elasticsearch-swarm.yaml
-- conductor-swarm.yaml
-
-The command to be executed should looks like: `docker stack deploy -c dynomite-swarm.yaml -c elasticsearch-swarm.yaml -c conductor-swarm.yaml [your stack name]`
-
-If you plan to deploy **more than 4 nodes** for dynomite persistence you should modify the `dynomite-swarm.yaml` and the `seeds.list` files as per your needs.
-The `conductor-swarm-config.properties` should be left unmodified.
+- 1 Database node that can be postgres (default), mysql or mariadb
+- 2 Optional replicated instances of the PyExec worker running the Http, Eval and Shell tasks
+- 1 Optional cluster-replacement service that sets up a networking environment (including an HAProxy load balancer) similar to the one available in production. It is disabled by default.
+
+The default configuration is run with the command: `ansible-playbook site.yaml`. Swarm and configuration files are generated inside a temporary folder named /tmp/conductor_stack on the local machine.
+
+- To change the destination folder, use the switch: `-e target_path=anotherdir`
+- To only review the generated files, run: `ansible-playbook site.yaml -e dry=true`
+- To switch between postgres and mysql, set the db variable: `-e db=mysql`
+- To skip worker creation, set the noworker variable: `-e noworker=true`
+- To enable the cluster replacement, use the switch: `-e cluster_replacement=true`
+- If you run the stack in production behind a load-balanced setup, ensure the cluster_check variable is true: `ansible-playbook site.yaml -e cluster_check=true`
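The switches above can be combined on a single command line. A sketch of a combined invocation (values are illustrative, not a recommended production setup):

```shell
# Hypothetical combined run: MySQL persistence, no workers,
# HAProxy cluster replacement enabled. Each -e flag mirrors a
# switch documented above.
CMD="ansible-playbook site.yaml -e db=mysql -e noworker=true -e cluster_replacement=true"
echo "$CMD"
```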
+
+Other settings can be fine-tuned through the variables of the individual roles:
+
+- *common*: defaults and common tasks
+- *conductor*: defaults, templates and tasks for generating swarm files for the replicated conductor-server and ui
+- *elasticsearch*: defaults, templates and tasks for starting a single instance of elasticsearch in the swarm
+- *mysql*: defaults, templates and tasks for starting a single instance of mysql/mariadb in the swarm
+- *postgres*: defaults, templates and tasks for starting a single instance of postgres in the swarm
+- *workers*: defaults and tasks for starting a replicated instance of the workers that execute HTTP, Shell and Eval operations in the swarm
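Role defaults can also be overridden from an extra-vars file instead of repeated `-e` switches. A hypothetical `overrides.yaml` (variable names are taken from the role defaults in this patch; values are illustrative):

```yaml
# Hypothetical extra-vars file; pass it with:
#   ansible-playbook site.yaml -e @overrides.yaml
target_path: /tmp/conductor_stack   # where the generated swarm files land
conductor_db: postgres              # postgres (default) or mysql
conductor_workers_server: http://conductor-dev.int.d4science.net/api
```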
## Change log
diff --git a/ansible/roles/ansible-role-lr62-workflows/defaults/main.yaml b/ansible/roles/ansible-role-lr62-workflows/defaults/main.yaml
deleted file mode 100644
index b1ed55b..0000000
--- a/ansible/roles/ansible-role-lr62-workflows/defaults/main.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-target_path: "/tmp/lr62workflows"
-conductor_server: "http://conductor-dev.int.d4science.net/api"
-conductor_workflowdef_endpoint: "{{ conductor_server }}/metadata/workflow"
-conductor_taskdef_endpoint: "{{ conductor_server }}/metadata/taskdefs"
-workflows:
- - create-user-add-to-vre
- - group_deleted
- - user-group_created
- - user-group-role_created
- - group_created
- - invitation-accepted
- - user-group_deleted
- - user-group-role_deleted
- - delete-user-account
-#keycloak_realm: d4science
-keycloak_host: "https://accounts.dev.d4science.org/auth"
-keycloak: "{{ keycloak_host }}/realms"
-keycloak_admin: "{{ keycloak_host }}/admin/realms"
-keycloak_auth: "c93501bd-abeb-4228-bc28-afac38877338"
-liferay: "https://next.d4science.org/api/jsonws"
-liferay_auth: "bm90aWZpY2F0aW9uc0BkNHNjaWVuY2Uub3JnOmdjdWJlcmFuZG9tMzIx"
diff --git a/ansible/roles/ansible-role-lr62-workflows/tasks/main.yaml b/ansible/roles/ansible-role-lr62-workflows/tasks/main.yaml
deleted file mode 100644
index d2920e5..0000000
--- a/ansible/roles/ansible-role-lr62-workflows/tasks/main.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
----
-- name: Generate taskdefs
- template:
- src: "templates/taskdefs.json.j2"
- dest: "{{ target_path }}/taskdefs.json"
-
-- name: Upload task definitions
- uri:
- url: "{{ conductor_taskdef_endpoint }}"
- method: POST
- src: "{{ target_path }}/taskdefs.json"
- body_format: json
- status_code: 204
- follow_redirects: yes
-
-- name: Generate workflows
- template:
- src: "templates/{{ item }}.json.j2"
- dest: "{{ target_path }}/{{ item }}.json"
- loop: "{{ workflows }}"
-
-- name: Upload workflows
- uri:
- url: "{{ conductor_workflowdef_endpoint }}"
- method: POST
- src: "{{ target_path }}/{{ item }}.json"
- body_format: json
- follow_redirects: yes
- status_code: [200, 204, 409]
- loop:
- "{{ workflows }}"
diff --git a/ansible/roles/ansible-role-lr62-workflows/templates/create-user-add-to-vre.json.j2 b/ansible/roles/ansible-role-lr62-workflows/templates/create-user-add-to-vre.json.j2
deleted file mode 100644
index 58542e7..0000000
--- a/ansible/roles/ansible-role-lr62-workflows/templates/create-user-add-to-vre.json.j2
+++ /dev/null
@@ -1,167 +0,0 @@
-{
- "ownerApp" : "Orchestrator",
- "name" : "create-user-add-to-vre",
- "createBy" : "Marco Lettere",
- "description": "Batch create a user with a membership in a specific group",
- "version" : 1,
- "ownerEmail" : "m.lettere@gmail.com",
- "inputParameters" : ["user", "first-name", "last-name", "email", "password", "group"],
- "tasks" : [
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "init",
- "type": "LAMBDA",
- "inputParameters": {
- "keycloak": "{{ keycloak }}",
- "keycloak_admin" : "{{ keycloak_admin }}",
- "group" : "${workflow.input.group}",
- "scriptExpression": "var path = $.group.split('%2F').slice(1); return { 'tree' : Java.to(path, 'java.lang.Object[]'), 'name' : path.slice(path.length-1)[0]}"
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "authorize",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak}/protocol/openid-connect/token",
- "method" : "POST",
- "headers" : {
- "Accept" : "application/json"
- },
- "body" : {
- "client_id" : "orchestrator",
- "client_secret" : "{{ keycloak_auth }}",
- "grant_type" : "client_credentials"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "create_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users",
- "expect" : 201,
- "method" : "POST",
- "body" : {
- "username": "${workflow.input.user}",
- "firstName": "${workflow.input.first-name}",
- "lastName": "${workflow.input.last-name}",
- "email": "${workflow.input.email}",
- "credentials": [
- {
- "temporary": true,
- "type": "password",
- "value": "${workflow.input.password}"
- }
- ],
- "requiredActions": ["UPDATE_PASSWORD"],
- "emailVerified": true,
- "enabled": true
- },
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Content-Type" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_client",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/clients",
- "params" : { "clientId" : "${workflow.input.group}"},
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "get_client_roles",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
- "expect" : [200, 404],
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "check_role_existance",
- "taskReferenceName" : "check_role_existance",
- "type" : "DECISION",
- "inputParameters" :{
- "previous_outcome" : "${get_client_roles.output.status}"
- },
- "caseValueParam" : "previous_outcome",
- "decisionCases" : {
- "200" : [
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "select_role",
- "type": "LAMBDA",
- "inputParameters": {
- "role": "${workflow.input.role}",
- "roles" : "${get_client_roles.output.body}",
- "scriptExpression": "for(var i=0; i < $.roles.length;i++){if($.roles[i]['name'] == 'Member') return Java.to([$.roles[i]], 'java.lang.Object[]')}"
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "look_up_groups",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/groups?search=${init.output.result.name}",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "extract_group",
- "type": "LAMBDA",
- "inputParameters": {
- "tree" : "${init.output.result.tree}",
- "groups" : "${look_up_groups.output.body}",
- "scriptExpression": "function selectByPath(groups, path, level) { for (var i=0; i < groups.length; i++) {if (groups[i].name === path[level]) {if (level === path.length - 1) return groups[i];return selectByPath(groups[i].subGroups, path, level+1)}} return null; } return { 'group' : selectByPath($.groups, $.tree, 0)}"
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "assign_user_to_group",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/groups/${extract_group.output.result.group.id}",
- "method" : "PUT",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}"
- }
- }
- }
- ]
- }
- }
- ]
-}
diff --git a/ansible/roles/ansible-role-lr62-workflows/templates/delete-user-account.json.j2 b/ansible/roles/ansible-role-lr62-workflows/templates/delete-user-account.json.j2
deleted file mode 100644
index 8d1481a..0000000
--- a/ansible/roles/ansible-role-lr62-workflows/templates/delete-user-account.json.j2
+++ /dev/null
@@ -1,181 +0,0 @@
-{
- "ownerApp" : "Orchestrator",
- "name" : "delete-user-account",
- "createBy" : "Marco Lettere",
- "description": "Handle Admin events from Keycloak",
- "version" : 1,
- "ownerEmail" : "m.lettere@gmail.com",
- "inputParameters" : [ "userid" ],
- "tasks" : [
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "init",
- "type": "LAMBDA",
- "inputParameters": {
- "keycloak": "{{ keycloak }}/${workflow.input.realm}",
- "keycloak_admin" : "{{ keycloak_admin }}/${workflow.input.realm}",
- "liferay": "{{ liferay }}",
- "liferay_auth": "{{ liferay_auth }}",
- "keycloak_userid" : "${workflow.input.userid}",
- "scriptExpression": "1 == 1"
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "authorize",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak}/protocol/openid-connect/token",
- "method" : "POST",
- "headers" : {
- "Accept" : "application/json"
- },
- "body" : {
- "client_id" : "orchestrator",
- "client_secret" : "{{ keycloak_auth }}",
- "grant_type" : "client_credentials"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users/${init.input.keycloak_userid}",
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "fork_join",
- "taskReferenceName" : "global_delete_user",
- "type" : "FORK_JOIN",
- "forkTasks" : [
- [
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_lr_company",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.liferay}/company/get-company-by-web-id",
- "method" : "GET",
- "params" : { "webId" : "liferay.com"},
- "headers" : {
- "Authorization" : "Basic ${init.input.liferay_auth}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_lr_user_by_screenname",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.liferay}/user/get-user-by-screen-name",
- "method" : "GET",
- "params" : {
- "companyId" : "${lookup_lr_company.output.body.companyId}",
- "screenName" : "${lookup_user.output.body.username}"
- },
- "headers" : {
- "Authorization" : "Basic ${init.input.liferay_auth}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_lr_user_groups",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.liferay}/group/get-user-sites-groups",
- "method" : "GET",
- "params" : {
- "classNames" : "[\"com.liferay.portal.model.Group\"]",
- "userId" : "${lookup_lr_user_by_screenname.output.body.userId}",
- "max" : "-1"
- },
- "headers" : {
- "Authorization" : "Basic ${init.input.liferay_auth}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "build_delete_group_tasks",
- "type": "LAMBDA",
- "inputParameters": {
- "groups" : "${lookup_lr_user_groups.output.body.*.groupId}",
- "userId" : "${lookup_lr_user_by_screenname.output.body.userId}",
- "scriptExpression": "inputs = {}; tasks = []; for(var i=0;i<$.groups.length;i++){tasks.push({'name': 'pyrest','type' : 'SIMPLE','taskReferenceName' : 'del-' + i});inputs['del-'+i] = {'url' : '${init.input.liferay}/user/unset-group-users?userIds=' + $.userId + '&groupId=' + $.groups[i],'method' : 'POST','headers' : {'Authorization' : 'Basic ' + '${init.input.liferay_auth}', 'Accept' : 'application/json'}}}; return { 'tasks' : Java.to(tasks, 'java.util.Map[]'), 'inputs' : inputs};"
- }
- },
- {
- "name" : "fork_dynamic",
- "type" : "FORK_JOIN_DYNAMIC",
- "taskReferenceName" : "parallel_delete_group",
- "inputParameters" : {
- "tasks" : "${build_delete_group_tasks.output.result.tasks}",
- "inputs" : "${build_delete_group_tasks.output.result.inputs}"
- },
- "dynamicForkTasksParam": "tasks",
- "dynamicForkTasksInputParamName": "inputs"
- },
- {
- "name" : "join",
- "type" : "JOIN",
- "taskReferenceName" : "join_parallel_group_deletion"
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "delete_lr_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.liferay}/user/delete-user",
- "method" : "POST",
- "params" : {
- "userId" : "${lookup_lr_user_by_screenname.output.body.userId}"
- },
- "headers" : {
- "Authorization" : "Basic ${init.input.liferay_auth}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "lr_final_task",
- "type": "LAMBDA",
- "inputParameters" : {
- "scriptExpression" : "1 == 1"
- }
- }
- ]
- ]
- },
- {
- "name" : "join",
- "type" : "JOIN",
- "taskReferenceName" : "global_delete_user_join",
- "joinOn": [ "lr_final_task"]
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "delete_keycloak_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users/${init.input.keycloak_userid}",
- "method" : "DELETE",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- }
- ]
-}
diff --git a/ansible/roles/ansible-role-lr62-workflows/templates/group_created.json.j2 b/ansible/roles/ansible-role-lr62-workflows/templates/group_created.json.j2
deleted file mode 100644
index e7cd250..0000000
--- a/ansible/roles/ansible-role-lr62-workflows/templates/group_created.json.j2
+++ /dev/null
@@ -1,343 +0,0 @@
-{
- "ownerApp" : "Orchestrator",
- "name" : "group_created",
- "createBy" : "Marco Lettere",
- "description": "Handle workflow related to Portal event group_created",
- "version" : 1,
- "ownerEmail" : "marco.lettere@nubisware.com",
- "inputParameters" : ["user", "group"],
- "tasks" : [
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "init",
- "type": "LAMBDA",
- "inputParameters": {
- "keycloak": "{{ keycloak }}",
- "keycloak_admin" : "{{ keycloak_admin }}",
- "clientId" : "${workflow.input.group}",
- "scriptExpression": "var tree = $.clientId.split('%2F'); return { 'tree' : tree, 'child': tree[tree.length-1], 'append' : tree.slice(0,-1).join('/'), 'name' : tree.join('/')}"
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "authorize",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak}/protocol/openid-connect/token",
- "method" : "POST",
- "headers" : {
- "Accept" : "application/json"
- },
- "body" : {
- "client_id" : "orchestrator",
- "client_secret" : "{{ keycloak_auth }}",
- "grant_type" : "client_credentials"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "lookup_user",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "create_client",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/clients",
- "body" : {
- "clientId": "${init.input.clientId}",
- "name": "${init.output.result.name}",
- "description": "Client representation for ${init.output.result.name} context",
- "rootUrl": "http://localhost${init.output.result.name}",
- "enabled": true,
- "serviceAccountsEnabled": true,
- "standardFlowEnabled": true,
- "authorizationServicesEnabled": true,
- "publicClient": false,
- "protocol": "openid-connect"
- },
- "method" : "POST",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Content-Type" : "application/json"
- }
- }
- },
- {
- "name" : "fork_join",
- "taskReferenceName" : "fork_role_creation",
- "type" : "FORK_JOIN",
- "forkTasks" : [
- [{
- "name" : "pyrest",
- "taskReferenceName" : "create_role_member",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${create_client.output.headers.location}/roles",
- "body" : {
- "clientRole" : true, "name" : "Member", "description" : "Simple membership for ${init.output.result.name}"
- },
- "method" : "POST",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Content-Type" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "get_back_role_member",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${create_role_member.output.headers.location}",
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "create_kc_group",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/groups",
- "body" : {
- "name" : "${init.output.result.child}"
- },
- "method" : "POST",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Content-Type" : "application/json"
- }
- }
- },
- {
- "name" : "pyrest",
- "taskReferenceName" : "list_kc_groups",
- "type" : "SIMPLE",
- "inputParameters" : {
- "url" : "${init.input.keycloak_admin}/groups",
- "method" : "GET",
- "headers" : {
- "Authorization" : "Bearer ${authorize.output.body.access_token}",
- "Accept" : "application/json"
- }
- }
- },
- {
- "name": "LAMBDA_TASK",
- "taskReferenceName": "prepare",
- "type": "LAMBDA",
- "inputParameters": {
- "append" : "${init.output.result.append}",
- "location" : "${create_kc_group.output.headers.location}",
- "client_location" : "${create_client.output.headers.location}",
- "groups" : "${list_kc_groups.output.body}",
- "scriptExpression": "var newid=$.location.split('/').pop(); var client_id = $.client_location.split('/').pop(); function recurse(inp){for(var i=0;i /dynomite/auto_dynomite.yml
-
-#Start redis server on 22122
-redis-server --port 22122 &
-
-src/dynomite --conf-file=/dynomite/auto_dynomite.yml #-v11
diff --git a/ansible/roles/cluster-replacement/defaults/main.yml b/roles/cluster-replacement/defaults/main.yml
similarity index 100%
rename from ansible/roles/cluster-replacement/defaults/main.yml
rename to roles/cluster-replacement/defaults/main.yml
diff --git a/ansible/roles/cluster-replacement/tasks/main.yml b/roles/cluster-replacement/tasks/main.yml
similarity index 100%
rename from ansible/roles/cluster-replacement/tasks/main.yml
rename to roles/cluster-replacement/tasks/main.yml
diff --git a/ansible/roles/cluster-replacement/templates/haproxy-docker-swarm.yaml.j2 b/roles/cluster-replacement/templates/haproxy-docker-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/cluster-replacement/templates/haproxy-docker-swarm.yaml.j2
rename to roles/cluster-replacement/templates/haproxy-docker-swarm.yaml.j2
diff --git a/ansible/roles/cluster-replacement/templates/haproxy.cfg.j2 b/roles/cluster-replacement/templates/haproxy.cfg.j2
similarity index 100%
rename from ansible/roles/cluster-replacement/templates/haproxy.cfg.j2
rename to roles/cluster-replacement/templates/haproxy.cfg.j2
diff --git a/ansible/roles/cluster-replacement/vars/main.yml b/roles/cluster-replacement/vars/main.yml
similarity index 67%
rename from ansible/roles/cluster-replacement/vars/main.yml
rename to roles/cluster-replacement/vars/main.yml
index b3a4f44..f2502b6 100644
--- a/ansible/roles/cluster-replacement/vars/main.yml
+++ b/roles/cluster-replacement/vars/main.yml
@@ -1,3 +1,2 @@
---
-cluster_replacement: True
haproxy_docker_overlay_network: 'haproxy-public'
diff --git a/ansible/roles/common/defaults/main.yaml b/roles/common/defaults/main.yaml
similarity index 79%
rename from ansible/roles/common/defaults/main.yaml
rename to roles/common/defaults/main.yaml
index 8279808..ab0a403 100644
--- a/ansible/roles/common/defaults/main.yaml
+++ b/roles/common/defaults/main.yaml
@@ -1,4 +1,5 @@
---
target_path: /tmp/conductor_stack
conductor_network: conductor-network
+conductor_db: postgres
init_db: True
diff --git a/ansible/roles/common/tasks/main.yaml b/roles/common/tasks/main.yaml
similarity index 100%
rename from ansible/roles/common/tasks/main.yaml
rename to roles/common/tasks/main.yaml
diff --git a/ansible/roles/conductor/defaults/main.yaml b/roles/conductor/defaults/main.yaml
similarity index 100%
rename from ansible/roles/conductor/defaults/main.yaml
rename to roles/conductor/defaults/main.yaml
diff --git a/ansible/roles/conductor/tasks/main.yaml b/roles/conductor/tasks/main.yaml
similarity index 74%
rename from ansible/roles/conductor/tasks/main.yaml
rename to roles/conductor/tasks/main.yaml
index e4ec547..8ca93da 100644
--- a/ansible/roles/conductor/tasks/main.yaml
+++ b/roles/conductor/tasks/main.yaml
@@ -1,12 +1,4 @@
---
-#- name: Display switches
-# debug:
-# msg: "Cluster replacement {{ cluster_replacement }}"
-
-#- name: Display switches
-# debug:
-# msg: "Negative condition {{(cluster_replacement is not defined or not cluster_replacement) or (cluster_check is not defined or not cluster_check)}}"
-
- name: Generate conductor-swarm
template:
src: templates/conductor-swarm.yaml.j2
diff --git a/ansible/roles/conductor/templates/conductor-db-init-mysql.sql.j2 b/roles/conductor/templates/conductor-db-init-mysql.sql.j2
similarity index 100%
rename from ansible/roles/conductor/templates/conductor-db-init-mysql.sql.j2
rename to roles/conductor/templates/conductor-db-init-mysql.sql.j2
diff --git a/ansible/roles/conductor/templates/conductor-db-init-postgres.sql.j2 b/roles/conductor/templates/conductor-db-init-postgres.sql.j2
similarity index 100%
rename from ansible/roles/conductor/templates/conductor-db-init-postgres.sql.j2
rename to roles/conductor/templates/conductor-db-init-postgres.sql.j2
diff --git a/ansible/roles/conductor/templates/conductor-swarm-config.properties.j2 b/roles/conductor/templates/conductor-swarm-config.properties.j2
similarity index 100%
rename from ansible/roles/conductor/templates/conductor-swarm-config.properties.j2
rename to roles/conductor/templates/conductor-swarm-config.properties.j2
diff --git a/ansible/roles/conductor/templates/conductor-swarm.yaml.j2 b/roles/conductor/templates/conductor-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/conductor/templates/conductor-swarm.yaml.j2
rename to roles/conductor/templates/conductor-swarm.yaml.j2
diff --git a/ansible/roles/elasticsearch/defaults/main.yaml b/roles/elasticsearch/defaults/main.yaml
similarity index 100%
rename from ansible/roles/elasticsearch/defaults/main.yaml
rename to roles/elasticsearch/defaults/main.yaml
diff --git a/ansible/roles/elasticsearch/tasks/main.yaml b/roles/elasticsearch/tasks/main.yaml
similarity index 100%
rename from ansible/roles/elasticsearch/tasks/main.yaml
rename to roles/elasticsearch/tasks/main.yaml
diff --git a/ansible/roles/elasticsearch/templates/elasticsearch-swarm.yaml.j2 b/roles/elasticsearch/templates/elasticsearch-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/elasticsearch/templates/elasticsearch-swarm.yaml.j2
rename to roles/elasticsearch/templates/elasticsearch-swarm.yaml.j2
diff --git a/ansible/roles/mysql/defaults/main.yml b/roles/mysql/defaults/main.yml
similarity index 92%
rename from ansible/roles/mysql/defaults/main.yml
rename to roles/mysql/defaults/main.yml
index 546f130..840f38a 100644
--- a/ansible/roles/mysql/defaults/main.yml
+++ b/roles/mysql/defaults/main.yml
@@ -3,7 +3,7 @@ use_jdbc: True
mysql_image_name: 'mariadb'
mysql_service_name: 'mysqldb'
mysql_replicas: 1
-conductor_db: mysql
+conductor_db: mysql
jdbc_user: conductor
jdbc_pass: password
jdbc_db: conductor
diff --git a/ansible/roles/mysql/tasks/main.yaml b/roles/mysql/tasks/main.yaml
similarity index 100%
rename from ansible/roles/mysql/tasks/main.yaml
rename to roles/mysql/tasks/main.yaml
diff --git a/ansible/roles/mysql/templates/mysql-swarm.yaml.j2 b/roles/mysql/templates/mysql-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/mysql/templates/mysql-swarm.yaml.j2
rename to roles/mysql/templates/mysql-swarm.yaml.j2
diff --git a/ansible/roles/postgres/defaults/main.yml b/roles/postgres/defaults/main.yml
similarity index 89%
rename from ansible/roles/postgres/defaults/main.yml
rename to roles/postgres/defaults/main.yml
index b215103..4f54949 100644
--- a/ansible/roles/postgres/defaults/main.yml
+++ b/roles/postgres/defaults/main.yml
@@ -2,7 +2,7 @@
use_jdbc: True
postgres_service_name: 'postgresdb'
postgres_replicas: 1
-conductor_db: postgres
+conductor_db: postgres
jdbc_user: conductor
jdbc_pass: password
jdbc_db: conductor
diff --git a/ansible/roles/postgres/tasks/main.yaml b/roles/postgres/tasks/main.yaml
similarity index 100%
rename from ansible/roles/postgres/tasks/main.yaml
rename to roles/postgres/tasks/main.yaml
diff --git a/ansible/roles/postgres/templates/postgres-swarm.yaml.j2 b/roles/postgres/templates/postgres-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/postgres/templates/postgres-swarm.yaml.j2
rename to roles/postgres/templates/postgres-swarm.yaml.j2
diff --git a/ansible/roles/workers/defaults/main.yaml b/roles/workers/defaults/main.yaml
similarity index 59%
rename from ansible/roles/workers/defaults/main.yaml
rename to roles/workers/defaults/main.yaml
index e4bc40f..f201b8b 100644
--- a/ansible/roles/workers/defaults/main.yaml
+++ b/roles/workers/defaults/main.yaml
@@ -1,5 +1,6 @@
---
conductor_workers_server: http://conductor-dev.int.d4science.net/api
-conductor_workers: [ { service: 'base', image: 'nubisware/nubisware-conductor-worker-py-base', replicas: 2, threads: 1, pollrate: 1 }, { service: 'provisioning', image: 'nubisware/nubisware-conductor-worker-py-provisioning', replicas: 2, threads: 1, pollrate: 1 } ]
+conductor_workers: [ { service: 'base', image: 'nubisware/nubisware-conductor-worker-py-base', replicas: 2, threads: 1, pollrate: 1 }]
+#{service: 'provisioning', image: 'nubisware/nubisware-conductor-worker-py-provisioning', replicas: 2, threads: 1, pollrate: 1 }
diff --git a/ansible/roles/workers/tasks/main.yaml b/roles/workers/tasks/main.yaml
similarity index 100%
rename from ansible/roles/workers/tasks/main.yaml
rename to roles/workers/tasks/main.yaml
diff --git a/ansible/roles/workers/templates/conductor-workers-swarm.yaml.j2 b/roles/workers/templates/conductor-workers-swarm.yaml.j2
similarity index 100%
rename from ansible/roles/workers/templates/conductor-workers-swarm.yaml.j2
rename to roles/workers/templates/conductor-workers-swarm.yaml.j2
diff --git a/ansible/roles/workers/templates/config.cfg.j2 b/roles/workers/templates/config.cfg.j2
similarity index 100%
rename from ansible/roles/workers/templates/config.cfg.j2
rename to roles/workers/templates/config.cfg.j2
diff --git a/site.yaml b/site.yaml
new file mode 100644
index 0000000..6311245
--- /dev/null
+++ b/site.yaml
@@ -0,0 +1,55 @@
+---
+- hosts: localhost
+ roles:
+ - common
+ - role: cluster-replacement
+ when:
+ - cluster_replacement is defined and cluster_replacement|bool
+ - role: postgres
+ when: db is not defined or db == 'postgres'
+ - role: mysql
+ when: db is defined and db == 'mysql'
+ - elasticsearch
+ - conductor
+ tasks:
+ - name: Start {{ db|default('postgres', true) }} and elasticsearch
+ docker_stack:
+ name: conductor
+ state: present
+ compose:
+ - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
+ - "{{ target_path }}/elasticsearch-swarm.yaml"
+ when: dry is not defined or not dry|bool
+
+ - name: Waiting for databases
+ pause:
+ seconds: 10
+ when: dry is not defined or not dry|bool
+
+ - name: Start conductor
+ docker_stack:
+ name: conductor
+ state: present
+ compose:
+ - "{{ target_path }}/conductor-swarm.yaml"
+ when: dry is not defined or not dry|bool
+
+ - name: Start haproxy
+ docker_stack:
+ name: conductor
+ state: present
+ compose:
+ - "{{ target_path }}/haproxy-swarm.yaml"
+ when:
+ - dry is not defined or not dry|bool
+ - cluster_replacement is defined
+ - cluster_replacement|bool
+
+ - name: Start workers
+ include_role:
+ name: workers
+ when:
+ - dry is not defined or not dry|bool
+ - noworker is not defined or not noworker|bool
+
diff --git a/stack/conductor-swarm-config.properties b/stack/conductor-swarm-config.properties
deleted file mode 100644
index 751daee..0000000
--- a/stack/conductor-swarm-config.properties
+++ /dev/null
@@ -1,58 +0,0 @@
-# Servers.
-conductor.jetty.server.enabled=true
-conductor.grpc.server.enabled=false
-
-# Database persistence model. Possible values are memory, redis, and dynomite.
-# If ommitted, the persistence used is memory
-#
-# memory : The data is stored in memory and lost when the server dies. Useful for testing or demo
-# redis : non-Dynomite based redis instance
-# dynomite : Dynomite cluster. Use this for HA configuration.
-
-db=dynomite
-
-# Dynomite Cluster details.
-# format is host:port:rack separated by semicolon
-workflow.dynomite.cluster.hosts=dynomite1:8102:us-east-1b;dynomite2:8102:us-east-1b;dynomite3:8102:us-east-2b;dynomite4:8102:us-east-2b
-
-# Dynomite cluster name
-workflow.dynomite.cluster.name=dyno1
-
-# Namespace for the keys stored in Dynomite/Redis
-workflow.namespace.prefix=conductor
-
-# Namespace prefix for the dyno queues
-workflow.namespace.queue.prefix=conductor_queues
-
-# No. of threads allocated to dyno-queues (optional)
-queues.dynomite.threads=10
-
-# Non-quorum port used to connect to local redis. Used by dyno-queues.
-# When using redis directly, set this to the same port as redis server
-# For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
-queues.dynomite.nonQuorum.port=22122
-
-# Elastic search instance type. Possible values are memory and external.
-# If not specified, the instance type will be embedded in memory
-#
-# memory: The instance is created in memory and lost when the server dies. Useful for development and testing.
-# external: Elastic search instance runs outside of the server. Data is persisted and does not get lost when
-# the server dies. Useful for more stable environments like staging or production.
-workflow.elasticsearch.instanceType=external
-
-# Transport address to elasticsearch
-workflow.elasticsearch.url=elasticsearch:9300
-
-# Name of the elasticsearch cluster
-workflow.elasticsearch.index.name=conductor
-
-# Additional modules (optional)
-# conductor.additional.modules=class_extending_com.google.inject.AbstractModule
-
-# Additional modules for metrics collection (optional)
-# conductor.additional.modules=com.netflix.conductor.contribs.metrics.MetricsRegistryModule,com.netflix.conductor.contribs.metrics.LoggingMetricsModule
-# com.netflix.conductor.contribs.metrics.LoggingMetricsModule.reportPeriodSeconds=15
-
-# Load sample kitchen sink workflow
-loadSample=false
-
diff --git a/stack/conductor-swarm.yaml b/stack/conductor-swarm.yaml
deleted file mode 100644
index 0fffd21..0000000
--- a/stack/conductor-swarm.yaml
+++ /dev/null
@@ -1,59 +0,0 @@
-version: '3.6'
-
-services:
-  conductor-server:
-    environment:
-      - CONFIG_PROP=conductor-swarm-config.properties
-    image: nubisware/conductor-server
-    networks:
-      - conductor-network
-    ports:
-      - "8080:8080"
-    depends_on:
-      - elasticsearch
-      - dynomite1
-      - dynomite2
-    deploy:
-      mode: replicated
-      replicas: 2
-      #endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-    configs:
-      - source: swarm-config
-        target: /app/config/conductor-swarm-config.properties
-
-    logging:
-      driver: "journald"
-
-  conductor-ui:
-    environment:
-      - WF_SERVER=http://conductor-server:8080/api/
-    image: nubisware/conductor-ui
-    networks:
-      - conductor-network
-    ports:
-      - "5000:5000"
-    deploy:
-      mode: replicated
-      replicas: 2
-      #endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-
-networks:
-  conductor-network:
-
-configs:
-  swarm-config:
-    file: ./conductor-swarm-config.properties
diff --git a/stack/dynomite-swarm.yaml b/stack/dynomite-swarm.yaml
deleted file mode 100644
index 55f57e3..0000000
--- a/stack/dynomite-swarm.yaml
+++ /dev/null
@@ -1,101 +0,0 @@
-version: '3.6'
-
-services:
-  dynomite1:
-    environment:
-      - DYNO_NODE=dynomite1:8101:rack-1:d4s:0
-    image: nubisware/autodynomite:latest
-    networks:
-      conductor-network:
-    logging:
-      driver: "journald"
-    deploy:
-      mode: replicated
-      replicas: 1
-      endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-    configs:
-      - source: seeds.list
-        target: /dynomite/seeds.list
-
-  dynomite2:
-    environment:
-      - DYNO_NODE=dynomite2:8101:rack-1:d4s:2147483647
-    image: nubisware/autodynomite:latest
-    networks:
-      conductor-network:
-    logging:
-      driver: "journald"
-    deploy:
-      mode: replicated
-      replicas: 1
-      endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-    configs:
-      - source: seeds.list
-        target: /dynomite/seeds.list
-
-  dynomite3:
-    environment:
-      - DYNO_NODE=dynomite3:8101:rack-3:d4s:0
-    image: nubisware/autodynomite:latest
-    networks:
-      conductor-network:
-    logging:
-      driver: "journald"
-    deploy:
-      mode: replicated
-      replicas: 1
-      endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-    configs:
-      - source: seeds.list
-        target: /dynomite/seeds.list
-
-  dynomite4:
-    environment:
-      - DYNO_NODE=dynomite4:8101:rack-2:d4s:2147483647
-    image: nubisware/autodynomite:latest
-    networks:
-      conductor-network:
-    logging:
-      driver: "journald"
-    deploy:
-      mode: replicated
-      replicas: 1
-      endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-    configs:
-      - source: seeds.list
-        target: /dynomite/seeds.list
-
-networks:
-  conductor-network:
-
-configs:
-  seeds.list:
-    file: ./seeds.list
diff --git a/stack/elasticsearch-swarm.yaml b/stack/elasticsearch-swarm.yaml
deleted file mode 100644
index 9492b37..0000000
--- a/stack/elasticsearch-swarm.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-version: '3.6'
-
-services:
-
-  elasticsearch:
-    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
-    environment:
-      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
-      - transport.host=0.0.0.0
-      - discovery.type=single-node
-      - xpack.security.enabled=false
-    networks:
-      conductor-network:
-        aliases:
-          - es
-    logging:
-      driver: "journald"
-    deploy:
-      mode: replicated
-      replicas: 1
-      #endpoint_mode: dnsrr
-      placement:
-        constraints: [node.role == worker]
-      restart_policy:
-        condition: on-failure
-        delay: 5s
-        max_attempts: 3
-        window: 120s
-
-networks:
-  conductor-network:
diff --git a/stack/seeds.list b/stack/seeds.list
deleted file mode 100644
index 4fd7077..0000000
--- a/stack/seeds.list
+++ /dev/null
@@ -1,4 +0,0 @@
-dynomite1:8101:rack-1:d4s:0
-dynomite2:8101:rack-1:d4s:2147483647
-dynomite3:8101:rack-2:d4s:0
-dynomite4:8101:rack-2:d4s:2147483647