complete refactoring and removal of dynomite

This commit is contained in:
Marco Lettere 2020-11-19 18:04:30 +01:00
parent 1a75ca07ed
commit b874434f2d
61 changed files with 87 additions and 2135 deletions

View File

@ -2,6 +2,12 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
# Changelog for "conductor-setup" # Changelog for "conductor-setup"
## [v0.2.0]
- Factored out workflows
- Added relational persistence
- Removed Dynomite
- Added workers
## [v0.1.0-SNAPSHOT]
- First release. It provides Conductor HA with 2 instances. (#19689).<br>

View File

@ -1,6 +1,6 @@
# Conductor Setup
**Conductor Setup** is composed by a Docker image script that should be used to build the `autodynomite` image, and 3 different Docker Compose Swarm YAML files to deploy the Conductor in HA.
**Conductor Setup** is composed of a set of Ansible roles and a playbook named `site.yaml` for deploying, on a Docker Swarm, the Conductor microservice orchestrator by [Netflix OSS](https://netflix.github.io/conductor/).
## Structure of the project
@ -9,27 +9,37 @@ The Docker Compose Swarm files are present in the `stack` folder.
## Built With
* [Ansible](https://www.ansible.com)
* [Docker](https://www.docker.com)
## Documentation
The provided Docker stack files provide the following configuration:
- 4 Dynomite nodes (2 shards with 1 replica each, handled by Dynomite directly) based on the `autodynomite` image, backed by a Redis DB in the same container
- 2 Conductor Server nodes with 2 replicas handled by Swarm
- 2 Conductor UI nodes with 2 replicas handled by Swarm
- 1 Elasticsearch node
- 1 Database node that can be postgres (default), mysql or mariadb
- 2 optional replicated instances of the PyExec worker running the Http, Eval and Shell tasks
- 1 optional cluster-replacement service that sets up a networking environment (including an HAProxy LB) similar to the one available in production. It is disabled by default.
Build the Docker `autodynomite` image with the `Dockerfile` present in the dynomite folder and launch the three Docker Compose Swarm files in sequence:
- dynomite-swarm.yaml
- elasticsearch-swarm.yaml
- conductor-swarm.yaml
The command to be executed should look like: `docker stack deploy -c dynomite-swarm.yaml -c elasticsearch-swarm.yaml -c conductor-swarm.yaml [your stack name]`
If you plan to deploy **more than 4 nodes** for dynomite persistence you should modify the `dynomite-swarm.yaml` and the `seeds.list` files as per your needs.
The `conductor-swarm-config.properties` should be left unmodified.
The default configuration is run with the command: `ansible-playbook site.yaml`
Files for the swarms and their configurations are generated inside a temporary folder named `/tmp/conductor_stack` on the local machine.
To change the destination folder, use the switch: `-e target_path=anotherdir`
If you only want to review the generated files, run `ansible-playbook site.yaml -e dry=true`
To switch between postgres and mysql, set the `db` variable accordingly: `-e db=mysql`
To skip worker creation, set the `noworker` variable: `-e noworker=true`
To enable the cluster replacement, use the switch: `-e cluster_replacement=true`
If you run the stack in production behind a load-balanced setup, make sure the variable `cluster_check` is true: `ansible-playbook site.yaml -e cluster_check=true`
These switches can be combined; see the example after the role list below.
Other settings can be fine-tuned by checking the variables in the corresponding roles, which are:
- *common*: defaults and common tasks
- *conductor*: defaults, templates and tasks for generating the swarm files for the replicated conductor-server and ui
- *elasticsearch*: defaults, templates and tasks for starting a single instance of elasticsearch in the swarm
- *mysql*: defaults, templates and tasks for starting a single instance of mysql/mariadb in the swarm
- *postgres*: defaults, templates and tasks for starting a single instance of postgres in the swarm
- *workers*: defaults and tasks for starting in the swarm replicated instances of the workers executing the HTTP, Shell and Eval operations
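For example (the values below are only illustrative), several of the switches described above can be combined in a single run:

```bash
# Generate the stack files in a custom folder, use mysql instead of postgres,
# enable the HAProxy-based cluster replacement, and skip the actual deployment.
ansible-playbook site.yaml -e target_path=/tmp/conductor_review -e db=mysql -e cluster_replacement=true -e dry=true
```

This only generates the stack files under `/tmp/conductor_review`, so they can be reviewed before a real deployment.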
## Change log

View File

@ -1,22 +0,0 @@
---
target_path: "/tmp/lr62workflows"
conductor_server: "http://conductor-dev.int.d4science.net/api"
conductor_workflowdef_endpoint: "{{ conductor_server }}/metadata/workflow"
conductor_taskdef_endpoint: "{{ conductor_server }}/metadata/taskdefs"
workflows:
- create-user-add-to-vre
- group_deleted
- user-group_created
- user-group-role_created
- group_created
- invitation-accepted
- user-group_deleted
- user-group-role_deleted
- delete-user-account
#keycloak_realm: d4science
keycloak_host: "https://accounts.dev.d4science.org/auth"
keycloak: "{{ keycloak_host }}/realms"
keycloak_admin: "{{ keycloak_host }}/admin/realms"
keycloak_auth: "c93501bd-abeb-4228-bc28-afac38877338"
liferay: "https://next.d4science.org/api/jsonws"
liferay_auth: "bm90aWZpY2F0aW9uc0BkNHNjaWVuY2Uub3JnOmdjdWJlcmFuZG9tMzIx"

View File

@ -1,31 +0,0 @@
---
- name: Generate taskdefs
template:
src: "templates/taskdefs.json.j2"
dest: "{{ target_path }}/taskdefs.json"
- name: Upload task definitions
uri:
url: "{{ conductor_taskdef_endpoint }}"
method: POST
src: "{{ target_path }}/taskdefs.json"
body_format: json
status_code: 204
follow_redirects: yes
- name: Generate workflows
template:
src: "templates/{{ item }}.json.j2"
dest: "{{ target_path }}/{{ item }}.json"
loop: "{{ workflows }}"
- name: Upload workflows
uri:
url: "{{ conductor_workflowdef_endpoint }}"
method: POST
src: "{{ target_path }}/{{ item }}.json"
body_format: json
follow_redirects: yes
status_code: [200, 204, 409]
loop:
"{{ workflows }}"

View File

@ -1,167 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "create-user-add-to-vre",
"createBy" : "Marco Lettere",
"description": "Batch create a user with a membership in a specific group",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["user", "first-name", "last-name", "email", "password", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"group" : "${workflow.input.group}",
"scriptExpression": "var path = $.group.split('%2F').slice(1); return { 'tree' : Java.to(path, 'java.lang.Object[]'), 'name' : path.slice(path.length-1)[0]}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "create_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users",
"expect" : 201,
"method" : "POST",
"body" : {
"username": "${workflow.input.user}",
"firstName": "${workflow.input.first-name}",
"lastName": "${workflow.input.last-name}",
"email": "${workflow.input.email}",
"credentials": [
{
"temporary": true,
"type": "password",
"value": "${workflow.input.password}"
}
],
"requiredActions": ["UPDATE_PASSWORD"],
"emailVerified": true,
"enabled": true
},
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_client_roles",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
"expect" : [200, 404],
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "check_role_existance",
"taskReferenceName" : "check_role_existance",
"type" : "DECISION",
"inputParameters" :{
"previous_outcome" : "${get_client_roles.output.status}"
},
"caseValueParam" : "previous_outcome",
"decisionCases" : {
"200" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "select_role",
"type": "LAMBDA",
"inputParameters": {
"role": "${workflow.input.role}",
"roles" : "${get_client_roles.output.body}",
"scriptExpression": "for(var i=0; i < $.roles.length;i++){if($.roles[i]['name'] == 'Member') return Java.to([$.roles[i]], 'java.lang.Object[]')}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "look_up_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups?search=${init.output.result.name}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "extract_group",
"type": "LAMBDA",
"inputParameters": {
"tree" : "${init.output.result.tree}",
"groups" : "${look_up_groups.output.body}",
"scriptExpression": "function selectByPath(groups, path, level) { for (var i=0; i < groups.length; i++) {if (groups[i].name === path[level]) {if (level === path.length - 1) return groups[i];return selectByPath(groups[i].subGroups, path, level+1)}} return null; } return { 'group' : selectByPath($.groups, $.tree, 0)}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "assign_user_to_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/groups/${extract_group.output.result.group.id}",
"method" : "PUT",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}"
}
}
}
]
}
}
]
}

View File

@ -1,181 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "delete-user-account",
"createBy" : "Marco Lettere",
"description": "Handle Admin events from Keycloak",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : [ "userid" ],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}/${workflow.input.realm}",
"keycloak_admin" : "{{ keycloak_admin }}/${workflow.input.realm}",
"liferay": "{{ liferay }}",
"liferay_auth": "{{ liferay_auth }}",
"keycloak_userid" : "${workflow.input.userid}",
"scriptExpression": "1 == 1"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${init.input.keycloak_userid}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "fork_join",
"taskReferenceName" : "global_delete_user",
"type" : "FORK_JOIN",
"forkTasks" : [
[
{
"name" : "pyrest",
"taskReferenceName" : "lookup_lr_company",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.liferay}/company/get-company-by-web-id",
"method" : "GET",
"params" : { "webId" : "liferay.com"},
"headers" : {
"Authorization" : "Basic ${init.input.liferay_auth}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_lr_user_by_screenname",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.liferay}/user/get-user-by-screen-name",
"method" : "GET",
"params" : {
"companyId" : "${lookup_lr_company.output.body.companyId}",
"screenName" : "${lookup_user.output.body.username}"
},
"headers" : {
"Authorization" : "Basic ${init.input.liferay_auth}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_lr_user_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.liferay}/group/get-user-sites-groups",
"method" : "GET",
"params" : {
"classNames" : "[\"com.liferay.portal.model.Group\"]",
"userId" : "${lookup_lr_user_by_screenname.output.body.userId}",
"max" : "-1"
},
"headers" : {
"Authorization" : "Basic ${init.input.liferay_auth}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "build_delete_group_tasks",
"type": "LAMBDA",
"inputParameters": {
"groups" : "${lookup_lr_user_groups.output.body.*.groupId}",
"userId" : "${lookup_lr_user_by_screenname.output.body.userId}",
"scriptExpression": "inputs = {}; tasks = []; for(var i=0;i<$.groups.length;i++){tasks.push({'name': 'pyrest','type' : 'SIMPLE','taskReferenceName' : 'del-' + i});inputs['del-'+i] = {'url' : '${init.input.liferay}/user/unset-group-users?userIds=' + $.userId + '&groupId=' + $.groups[i],'method' : 'POST','headers' : {'Authorization' : 'Basic ' + '${init.input.liferay_auth}', 'Accept' : 'application/json'}}}; return { 'tasks' : Java.to(tasks, 'java.util.Map[]'), 'inputs' : inputs};"
}
},
{
"name" : "fork_dynamic",
"type" : "FORK_JOIN_DYNAMIC",
"taskReferenceName" : "parallel_delete_group",
"inputParameters" : {
"tasks" : "${build_delete_group_tasks.output.result.tasks}",
"inputs" : "${build_delete_group_tasks.output.result.inputs}"
},
"dynamicForkTasksParam": "tasks",
"dynamicForkTasksInputParamName": "inputs"
},
{
"name" : "join",
"type" : "JOIN",
"taskReferenceName" : "join_parallel_group_deletion"
},
{
"name" : "pyrest",
"taskReferenceName" : "delete_lr_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.liferay}/user/delete-user",
"method" : "POST",
"params" : {
"userId" : "${lookup_lr_user_by_screenname.output.body.userId}"
},
"headers" : {
"Authorization" : "Basic ${init.input.liferay_auth}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "lr_final_task",
"type": "LAMBDA",
"inputParameters" : {
"scriptExpression" : "1 == 1"
}
}
]
]
},
{
"name" : "join",
"type" : "JOIN",
"taskReferenceName" : "global_delete_user_join",
"joinOn": [ "lr_final_task"]
},
{
"name" : "pyrest",
"taskReferenceName" : "delete_keycloak_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${init.input.keycloak_userid}",
"method" : "DELETE",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
}
]
}

View File

@ -1,343 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "group_created",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event group_created",
"version" : 1,
"ownerEmail" : "marco.lettere@nubisware.com",
"inputParameters" : ["user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"clientId" : "${workflow.input.group}",
"scriptExpression": "var tree = $.clientId.split('%2F'); return { 'tree' : tree, 'child': tree[tree.length-1], 'append' : tree.slice(0,-1).join('/'), 'name' : tree.join('/')}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "create_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"body" : {
"clientId": "${init.input.clientId}",
"name": "${init.output.result.name}",
"description": "Client representation for ${init.output.result.name} context",
"rootUrl": "http://localhost${init.output.result.name}",
"enabled": true,
"serviceAccountsEnabled": true,
"standardFlowEnabled": true,
"authorizationServicesEnabled": true,
"publicClient": false,
"protocol": "openid-connect"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
},
{
"name" : "fork_join",
"taskReferenceName" : "fork_role_creation",
"type" : "FORK_JOIN",
"forkTasks" : [
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_member",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Member", "description" : "Simple membership for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_back_role_member",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_role_member.output.headers.location}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "create_kc_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups",
"body" : {
"name" : "${init.output.result.child}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "list_kc_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "prepare",
"type": "LAMBDA",
"inputParameters": {
"append" : "${init.output.result.append}",
"location" : "${create_kc_group.output.headers.location}",
"client_location" : "${create_client.output.headers.location}",
"groups" : "${list_kc_groups.output.body}",
"scriptExpression": "var newid=$.location.split('/').pop(); var client_id = $.client_location.split('/').pop(); function recurse(inp){for(var i=0;i<inp.length;i++){if(inp[i]['path'] === $.append) return inp[i]; else{var subr = recurse(inp[i].subGroups); if(subr != null) return subr;}} return null}; return {'group' : $.append == '' ? '' : recurse($.groups), 'newid' : newid, 'client_id' : client_id}"
}
},
{
"name": "decide_task",
"taskReferenceName": "decide1",
"inputParameters": {
"groupid": "${prepare.output.result.group}"
},
"type": "DECISION",
"caseValueParam": "groupid",
"decisionCases": {
"": [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "dummy",
"type": "LAMBDA",
"inputParameters": {
"scriptExpression": "1"
}
}
]
},
"defaultCase": [
{
"name" : "pyrest",
"taskReferenceName" : "move_new_kc_group_to_parent",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups/${prepare.output.result.group.id}/children",
"method" : "POST",
"body" : {
"id" : "${prepare.output.result.newid}"
},
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json",
"Content-Type" : "application/json"
}
}
}
]
},
{
"name" : "pyrest",
"taskReferenceName" : "assign_client_member_role_to_kc_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups/${prepare.output.result.newid}/role-mappings/clients/${prepare.output.result.client_id}",
"method" : "POST",
"body" : ["${get_back_role_member.output.body}"],
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json",
"Content-Type" : "application/json"
}
}
}
],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_accountingmanager",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Accounting-Manager", "description" : "Accounting-Manager for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_catalogueadmin",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Catalogue-Admin", "description" : "Catalogue-Admin for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_catalogueeditor",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Catalogue-Editor", "description" : "Catalogue-Editor for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_datamanager",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Data-Manager", "description" : "Data-Manager for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_dataminermanager",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "Dataminer-Manager", "description" : "Dataminer-Manager for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_voadmin",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "VO-Admin", "description" : "VO-Admin for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_vredesigner",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "VRE-Designer", "description" : "VRE-Designer for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}],
[{
"name" : "pyrest",
"taskReferenceName" : "create_role_vremanager",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${create_client.output.headers.location}/roles",
"body" : {
"clientRole" : true, "name" : "VRE-Manager", "description" : "VRE-Manager for ${init.output.result.name}"
},
"method" : "POST",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}]
]
},
{
"name" : "join",
"taskReferenceName" : "join_role_creation",
"type" : "JOIN"
}
]
}

View File

@ -1,100 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "group_deleted",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event group_created",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"group" : "${workflow.input.group}",
"scriptExpression" : "return $.group.split('%2F').join('/')"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "delete_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}",
"method" : "DELETE",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "list_kc_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "find_group_by_path",
"type": "LAMBDA",
"inputParameters": {
"path" : "${init.output.result}",
"groups" : "${list_kc_groups.output.body}",
"scriptExpression": "function recurse(inp){for(var i=0;i<inp.length;i++){if(inp[i]['path'] === $.path) return inp[i]; else{var subr = recurse(inp[i].subGroups); if(subr != null) return subr;}} return null}; return recurse($.groups)"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "delete_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups/${find_group_by_path.output.result.id}",
"method" : "DELETE",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}"
}
}
}
]
}

View File

@ -1,68 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "invitation-accepted",
"createBy" : "Mauro Mugnaini",
"description": "Handle workflow related to Portal event invitation-accepted",
"version" : 1,
"ownerEmail" : "mauro.mugnaini@nubisware.com",
"inputParameters" : ["user", "first-name", "last-name", "email", "password"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"scriptExpression": "1"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "create_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users",
"expect" : 201,
"method" : "POST",
"body" : {
"username": "${workflow.input.user}",
"firstName": "${workflow.input.first-name}",
"lastName": "${workflow.input.last-name}",
"email": "${workflow.input.email}",
"credentials": [
{
"temporary": true,
"type": "password",
"value": "${workflow.input.password}"
}
],
"requiredActions": ["UPDATE_PASSWORD"],
"emailVerified": true,
"enabled": true
},
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}
]
}

View File

@ -1,46 +0,0 @@
[
{
"name" : "jsr223",
"description" : "Execute JSR223 scripts",
"inputKeys" : ["code", "scriptdir", "script","engine"],
"outputKeys" : ["result"],
"ownerEmail" : "m.lettere@gmail.com"
},
{
"name" : "pyrest",
"description" : "Execute an HTTP request with pyrest worker",
"inputKeys" : ["url", "body", "contentType", "method", "accept", "headers", "connectionTimeout", "readTimeout"],
"outputKeys" : ["body", "status", "reason", "headers"],
"ownerEmail" : "m.lettere@gmail.com"
},
{
"name" : "pyeval",
"description" : "Execute arbitrary python code",
"inputKeys" : ["code"],
"outputKeys" : ["result"],
"ownerEmail" : "m.lettere@gmail.com"
},
{
"name" : "pyshell",
"description" : "Execute an Shell commands on target machine. Commands are in the form of an array of objects named commands of defined as { 'line': 'ls -l', 'expect' : 0, 'withshell' : False}.",
"inputKeys" : ["commands"],
"outputKeys" : ["results"],
"ownerEmail" : "m.lettere@gmail.com"
},
{
"name" : "pyansible",
"retryCount" : 0,
"description" : "Execute ansible playbook",
"inputKeys" : ["playbook", "hosts", "connection", "verbosity", "extra_vars", "gather_facts"],
"outputKeys" : ["ok", "failed", "unreachable"],
"ownerEmail" : "m.lettere@gmail.com"
},
{
"name" : "pypacker",
"retryCount" : 0,
"description" : "Executes packer.io command line for build and validate. It has been isolated in order to be able to start the worker only where OS dependencies are matched.",
"inputKeys" : ["command", "template"],
"outputKeys" : ["results"],
"ownerEmail" : "m.lettere@gmail.com"
}
]

View File

@ -1,138 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "user-group-role_created",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event user-group-role_created",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["role", "user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"scriptExpression": "1"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "check_user_existance",
"taskReferenceName": "check_user_existance",
"inputParameters": {
"user": "${lookup_user.output.body[0]}"
},
"type": "DECISION",
"caseExpression": "($.user == null ? 'true' : 'false')",
"decisionCases": {
"true": [
{
"name" : "terminate",
"taskReferenceName" : "terminate_when_no_user",
"type" : "TERMINATE",
"inputParameters" : {
"terminationStatus" : "COMPLETED"
}
}
]
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_client_roles",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
"method" : "GET",
"expect" : [200,404],
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "check_task",
"taskReferenceName": "check",
"inputParameters": {
"prev_status": "${get_client_roles.output.status}"
},
"type": "DECISION",
"caseValueParam": "prev_status",
"decisionCases": {
"200": [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "select_role",
"type": "LAMBDA",
"inputParameters": {
"role": "${workflow.input.role}",
"roles" : "${get_client_roles.output.body}",
"scriptExpression": "for(var i=0; i < $.roles.length;i++){if($.roles[i]['name'] == $.role) return Java.to([$.roles[i]], 'java.lang.Object[]')}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "assign_role_to_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/role-mappings/clients/${lookup_client.output.body[0].id}",
"expect" : [204, 404],
"method" : "POST",
"body" : "${select_role.output.result}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}
]
}
}
]
}

View File

@ -1,124 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "user-group-role_deleted",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event user-group-role_deleted",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["role", "user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"scriptExpression": "1"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "check_user_existance",
"taskReferenceName": "check_user_existance",
"inputParameters": {
"user": "${lookup_user.output.body[0]}"
},
"type": "DECISION",
"caseExpression": "($.user == null ? 'true' : 'false')",
"decisionCases": {
"true": [
{
"name" : "terminate",
"taskReferenceName" : "terminate_when_no_user",
"type" : "TERMINATE",
"inputParameters" : {
"terminationStatus" : "COMPLETED"
}
}
]
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_client_roles",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "select_role",
"type": "LAMBDA",
"inputParameters": {
"role": "${workflow.input.role}",
"roles" : "${get_client_roles.output.body}",
"scriptExpression": "for(var i=0; i < $.roles.length;i++){if($.roles[i]['name'] == $.role) return Java.to([$.roles[i]], 'java.lang.Object[]')}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "remove_role_from_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/role-mappings/clients/${lookup_client.output.body[0].id}",
"expect" : 204,
"method" : "DELETE",
"body" : "${select_role.output.result}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
}
]
}

View File

@ -1,137 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "user-group_created",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event user-group_created",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"group" : "${workflow.input.group}",
"scriptExpression": "var path = $.group.split('%2F').slice(1); return { 'tree' : Java.to(path, 'java.lang.Object[]'), 'name' : path.slice(path.length-1)[0]}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_client_roles",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
"expect" : [200, 404],
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "check_role_existance",
"taskReferenceName" : "check_role_existance",
"type" : "DECISION",
"inputParameters" :{
"previous_outcome" : "${get_client_roles.output.status}"
},
"caseValueParam" : "previous_outcome",
"decisionCases" : {
"200" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "select_role",
"type": "LAMBDA",
"inputParameters": {
"role": "${workflow.input.role}",
"roles" : "${get_client_roles.output.body}",
"scriptExpression": "for(var i=0; i < $.roles.length;i++){if($.roles[i]['name'] == 'Member') return Java.to([$.roles[i]], 'java.lang.Object[]')}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "look_up_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups?search=${init.output.result.name}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "extract_group",
"type": "LAMBDA",
"inputParameters": {
"tree" : "${init.output.result.tree}",
"groups" : "${look_up_groups.output.body}",
"scriptExpression": "function selectByPath(groups, path, level) { for (var i=0; i < groups.length; i++) {if (groups[i].name === path[level]) {if (level === path.length - 1) return groups[i];return selectByPath(groups[i].subGroups, path, level+1)}} return null; } return { 'group' : selectByPath($.groups, $.tree, 0)}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "assign_user_to_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/groups/${extract_group.output.result.group.id}",
"method" : "PUT",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}"
}
}
}
]
}
}
]
}

View File

@ -1,149 +0,0 @@
{
"ownerApp" : "Orchestrator",
"name" : "user-group_deleted",
"createBy" : "Marco Lettere",
"description": "Handle workflow related to Portal event user-group_deleted",
"version" : 1,
"ownerEmail" : "m.lettere@gmail.com",
"inputParameters" : ["role", "user", "group"],
"tasks" : [
{
"name": "LAMBDA_TASK",
"taskReferenceName": "init",
"type": "LAMBDA",
"inputParameters": {
"keycloak": "{{ keycloak }}",
"keycloak_admin" : "{{ keycloak_admin }}",
"group" : "${workflow.input.group}",
"scriptExpression": "var path = $.group.split('%2F').slice(1); return { 'tree' : Java.to(path, 'java.lang.Object[]'), 'name' : path.slice(path.length-1)[0]}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "authorize",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak}/protocol/openid-connect/token",
"method" : "POST",
"headers" : {
"Accept" : "application/json"
},
"body" : {
"client_id" : "orchestrator",
"client_secret" : "{{ keycloak_auth }}",
"grant_type" : "client_credentials"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users?username=${workflow.input.user}",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "check_user_existance",
"taskReferenceName": "check_user_existance",
"inputParameters": {
"user": "${lookup_user.output.body[0]}"
},
"type": "DECISION",
"caseExpression": "($.user == null ? 'true' : 'false')",
"decisionCases": {
"true": [
{
"name" : "terminate",
"taskReferenceName" : "terminate_when_no_user",
"type" : "TERMINATE",
"inputParameters" : {
"terminationStatus" : "COMPLETED"
}
}
]
}
},
{
"name" : "pyrest",
"taskReferenceName" : "lookup_client",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients",
"params" : { "clientId" : "${workflow.input.group}"},
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "get_client_roles",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/clients/${lookup_client.output.body[0].id}/roles",
"method" : "GET",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "remove_all_roles_from_user",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/role-mappings/clients/${lookup_client.output.body[0].id}",
"expect" : 204,
"method" : "DELETE",
"body" : "${get_client_roles.body}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Content-Type" : "application/json"
}
}
},
{
"name" : "pyrest",
"taskReferenceName" : "look_up_groups",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/groups?search=${init.output.result.name}",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}",
"Accept" : "application/json"
}
}
},
{
"name": "LAMBDA_TASK",
"taskReferenceName": "extract_group",
"type": "LAMBDA",
"inputParameters": {
"tree" : "${init.output.result.tree}",
"groups" : "${look_up_groups.output.body}",
"scriptExpression": "function selectByPath(groups, path, level) { for (var i=0; i < groups.length; i++) {if (groups[i].name === path[level]) {if (level === path.length - 1) return groups[i];return selectByPath(groups[i].subGroups, path, level+1)}} return null; } return { 'group' : selectByPath($.groups, $.tree, 0)}"
}
},
{
"name" : "pyrest",
"taskReferenceName" : "assign_user_to_group",
"type" : "SIMPLE",
"inputParameters" : {
"url" : "${init.input.keycloak_admin}/users/${lookup_user.output.body[0].id}/groups/${extract_group.output.result.group.id}",
"method" : "DELETE",
"headers" : {
"Authorization" : "Bearer ${authorize.output.body.access_token}"
}
}
}
]
}

View File

@ -1,3 +0,0 @@
---
dynomite_shards: 3
dynomite_replicas: 3

View File

@ -1,12 +0,0 @@
---
- name: Generate seedlist
template:
src: templates/seeds.list.j2
dest: "{{ target_path }}/seeds.list"
- name: Generate dynomite-swarm
vars:
seeds: "{{ lookup('file', '{{ target_path}}/seeds.list').splitlines() }}"
template:
src: templates/dynomite-swarm.yaml.j2
dest: "{{ target_path }}/dynomite-swarm.yaml"

View File

@ -1,25 +0,0 @@
{% for seed in seeds %}
{{ seed.split(':')[0] }}:
environment:
- DYNO_NODE={{ seed }}
image: nubisware/autodynomite:latest
networks:
{{ conductor_network }}:
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
#endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
#max_attempts: 3
#window: 120s
configs:
- source: seeds.list
target: /dynomite/seeds.list
{%endfor%}

View File

@ -1,11 +0,0 @@
version: '3.6'
services:
{% include 'dynomite-service.yaml.j2' %}
networks:
{{ conductor_network }}:
configs:
seeds.list:
file: seeds.list

View File

@ -1,7 +0,0 @@
{% set datacenter = "us-east-1" %}
{% for replica in range(1,dynomite_replicas+1) %}
{% set replicaloop = loop %}
{% for shard in range(1,dynomite_shards+1) %}
dynomite{{ (replicaloop.index - 1) * dynomite_shards + shard}}:8101:{{ datacenter }}{{ "abcdefghijklmnopqrstuvwxyz"[replicaloop.index - 1] }}:{{ datacenter }}:{{ (shard - 1) * (4294967295 // dynomite_shards) }}
{% endfor %}
{% endfor %}

View File

@ -1,29 +0,0 @@
---
- hosts: localhost
roles:
- common
- elasticsearch
- dynomite
tasks:
- name: Start dynomite and es
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/dynomite-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- hosts: localhost
roles:
- common
- cluster-replacement
- conductor
tasks:
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"
- "{{ target_path }}/haproxy-swarm.yaml"

View File

@ -1,27 +0,0 @@
---
- hosts: localhost
roles:
- common
- dynomite
- elasticsearch
tasks:
- name: Start dynomite and es
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/dynomite-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- hosts: localhost
roles:
- common
- conductor
tasks:
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"

View File

@ -1,34 +0,0 @@
---
- hosts: localhost
roles:
- common
- cluster-replacement
- mysql
- elasticsearch
- conductor
tasks:
- name: "Start {{ mysql_image_name }} and es"
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/mysql-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- name: "Waiting for {{ mysql_image_name }} and es DBs"
pause:
seconds: 60
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"
- name: Start haproxy
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/haproxy-swarm.yaml"

View File

@ -1,25 +0,0 @@
---
- hosts: localhost
roles:
- common
- mysql
- elasticsearch
- conductor
tasks:
- name: "Start {{ mysql_image_name }} and es"
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/mysql-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- name: "Waiting for {{ mysql_image_name }} and es DBs"
pause:
seconds: 60
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"

View File

@ -1,34 +0,0 @@
---
- hosts: localhost
roles:
- common
- cluster-replacement
- postgres
- elasticsearch
- conductor
tasks:
- name: Start postgres and es
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/postgres-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- name: Waiting for postgres and es DBs
pause:
seconds: 10
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"
- name: Start haproxy
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/haproxy-swarm.yaml"

View File

@ -1,25 +0,0 @@
---
- hosts: localhost
roles:
- common
- postgres
- elasticsearch
- conductor
tasks:
- name: Start postgres and es
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/postgres-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
- name: Waiting for postgres and es DBs
pause:
seconds: 10
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"

View File

@ -1,23 +0,0 @@
---
- hosts: localhost
roles:
- common
- workers
#tasks:
#- name: Start postgres and es
#docker_stack:
# name: conductor
# state: present
# compose:
# - "{{ target_path }}/postgres-swarm.yaml"
# - "{{ target_path }}/elasticsearch-swarm.yaml"
#- name: Waiting for postgres and ES DBs
#pause:
# seconds: 10
#- name: Start conductor
#docker_stack:
# name: conductor
# state: present
# compose:
# - "{{ target_path }}/conductor-swarm.yaml"

View File

@ -1,4 +0,0 @@
---
- hosts: localhost
roles:
- ansible-role-lr62-workflows

View File

@ -1,63 +0,0 @@
#
# Dockerfile derived from the good work of 'Iannis Papapanagiotou' (https://github.com/ipapapa/DynomiteDocker/blob/master/Dockerfile)
# It uses the startup.sh script to generate the dynomite conf YAML based on seeds.conf file provided via 'docker swarm configs' directive
# as '/dynomite/seeds.conf' and the DYNO_NODE env. variable that identifies the node itself (that must exactly match one of the lines in the seeds file)
#
# Set the base image to Ubuntu
FROM ubuntu:14.04
# File Author / Maintainer
MAINTAINER Marco Lettere, Mauro Mugnaini - Nubisware S.r.l.
# Update the repository sources list and Install package Build Essential
RUN apt-get update && apt-get install -y \
autoconf \
build-essential \
dh-autoreconf \
git \
libssl-dev \
libtool \
python-software-properties\
redis-server \
tcl8.5
# Clone the Dynomite Git
RUN git clone https://github.com/Netflix/dynomite.git
RUN echo 'Git repo has been cloned in your Docker VM'
WORKDIR dynomite/
# Autoreconf
RUN autoreconf -fvi \
&& ./configure --enable-debug=log \
&& CFLAGS="-ggdb3 -O0" ./configure --enable-debug=full \
&& make \
&& make install
##################### INSTALLATION ENDS #####################
# Expose the peer port
#RUN echo 'Exposing peer port 8101'
EXPOSE 8101
# Expose the underlying Redis port
#RUN echo 'Exposing Redis port 22122'
EXPOSE 22122
# Expose the stats/admin port
#RUN echo 'Exposing stats/admin port 22222'
EXPOSE 22222
# Default port to access Dynomite
#RUN echo 'Exposing client port for Dynomite 8102'
EXPOSE 8102
# Setting overcommit for Redis to be able to do BGSAVE/BGREWRITEAOF
RUN sysctl vm.overcommit_memory=1
# Set the entry-point to be the startup script
ENTRYPOINT ["/dynomite/startup.sh"]
#RUN echo 'copy startup file...'
COPY scripts/startup.sh ./startup.sh
#RUN echo '... done'

View File

@ -1,31 +0,0 @@
#!/bin/bash
# GENERATE a dynomite yaml configuration based on DYNO_NODE env. variable and 'seeds.list' file.
NODE=${DYNO_NODE:-"dynomite:8101:rack-1:datacenter-1:0"}
IFS=':' read -r -a NODEandPORTandRACKandDATACENTERandTOKEN <<< "$NODE"
echo "I am $NODE"
{
printf 'dyn_o_mite:\n'
printf ' datacenter: %s\n' "${NODEandPORTandRACKandDATACENTERandTOKEN[3]}"
printf ' rack: %s\n' "${NODEandPORTandRACKandDATACENTERandTOKEN[2]}"
printf ' dyn_listen: 0.0.0.0:8101\n'
printf ' listen: 0.0.0.0:8102\n'
printf ' dyn_seed_provider: simple_provider\n'
printf ' dyn_seeds:\n'
while IFS= read -r SEED_LINE; do
if [ "$SEED_LINE" != "$NODE" ]; then
printf ' - %s\n' "$SEED_LINE";
fi
done < seeds.list
printf ' tokens: %s\n' "${NODEandPORTandRACKandDATACENTERandTOKEN[4]}"
printf ' servers:\n'
printf ' - 127.0.0.1:22122:1\n'
} > /dynomite/auto_dynomite.yml
#Start redis server on 22122
redis-server --port 22122 &
src/dynomite --conf-file=/dynomite/auto_dynomite.yml #-v11

View File

@ -1,3 +1,2 @@
---
cluster_replacement: True
haproxy_docker_overlay_network: 'haproxy-public'

View File

@ -1,4 +1,5 @@
---
target_path: /tmp/conductor_stack
conductor_network: conductor-network
conductor_db: postgres
init_db: True

View File

@ -1,12 +1,4 @@
---
#- name: Display switches
# debug:
# msg: "Cluster replacement {{ cluster_replacement }}"
#- name: Display switches
# debug:
# msg: "Negative condition {{(cluster_replacement is not defined or not cluster_replacement) or (cluster_check is not defined or not cluster_check)}}"
- name: Generate conductor-swarm
template:
src: templates/conductor-swarm.yaml.j2

View File

@ -1,5 +1,6 @@
---
conductor_workers_server: http://conductor-dev.int.d4science.net/api
conductor_workers: [ { service: 'base', image: 'nubisware/nubisware-conductor-worker-py-base', replicas: 2, threads: 1, pollrate: 1 }, { service: 'provisioning', image: 'nubisware/nubisware-conductor-worker-py-provisioning', replicas: 2, threads: 1, pollrate: 1 } ]
conductor_workers: [ { service: 'base', image: 'nubisware/nubisware-conductor-worker-py-base', replicas: 2, threads: 1, pollrate: 1 }]
#{service: 'provisioning', image: 'nubisware/nubisware-conductor-worker-py-provisioning', replicas: 2, threads: 1, pollrate: 1 }

site.yaml (new file, 55 lines)
View File

@ -0,0 +1,55 @@
---
- hosts: localhost
roles:
- common
- role: cluster-replacement
when:
- cluster_replacement is defined and cluster_replacement|bool
- role: postgres
when: db is not defined or db == 'postgres'
- role: mysql
when: db is defined and db == 'mysql'
- elasticsearch
- conductor
tasks:
- name: Start {{ db|default('postgres', true) }} and es
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
- "{{ target_path }}/elasticsearch-swarm.yaml"
when: dry is not defined or not dry|bool
- name: Waiting for databases
pause:
seconds: 10
when: dry is not defined or not dry|bool
- name: Start conductor
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/conductor-swarm.yaml"
when: dry is not defined or not dry|bool
- name: Start haproxy
docker_stack:
name: conductor
state: present
compose:
- "{{ target_path }}/haproxy-swarm.yaml"
when:
- dry is not defined or not dry|bool
- cluster_replacement is defined
- cluster_replacement|bool
- name: Start workers
include_role:
name: workers
when:
- dry is not defined or not dry|bool
- workers is defined
- workers|bool

View File

@ -1,58 +0,0 @@
# Servers.
conductor.jetty.server.enabled=true
conductor.grpc.server.enabled=false
# Database persistence model. Possible values are memory, redis, and dynomite.
# If omitted, the persistence used is memory
#
# memory : The data is stored in memory and lost when the server dies. Useful for testing or demo
# redis : non-Dynomite based redis instance
# dynomite : Dynomite cluster. Use this for HA configuration.
db=dynomite
# Dynomite Cluster details.
# format is host:port:rack separated by semicolon
workflow.dynomite.cluster.hosts=dynomite1:8102:us-east-1b;dynomite2:8102:us-east-1b;dynomite3:8102:us-east-2b;dynomite4:8102:us-east-2b
# Dynomite cluster name
workflow.dynomite.cluster.name=dyno1
# Namespace for the keys stored in Dynomite/Redis
workflow.namespace.prefix=conductor
# Namespace prefix for the dyno queues
workflow.namespace.queue.prefix=conductor_queues
# No. of threads allocated to dyno-queues (optional)
queues.dynomite.threads=10
# Non-quorum port used to connect to local redis. Used by dyno-queues.
# When using redis directly, set this to the same port as redis server
# For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
queues.dynomite.nonQuorum.port=22122
# Elastic search instance type. Possible values are memory and external.
# If not specified, the instance type will be embedded in memory
#
# memory: The instance is created in memory and lost when the server dies. Useful for development and testing.
# external: Elastic search instance runs outside of the server. Data is persisted and does not get lost when
# the server dies. Useful for more stable environments like staging or production.
workflow.elasticsearch.instanceType=external
# Transport address to elasticsearch
workflow.elasticsearch.url=elasticsearch:9300
# Name of the elasticsearch cluster
workflow.elasticsearch.index.name=conductor
# Additional modules (optional)
# conductor.additional.modules=class_extending_com.google.inject.AbstractModule
# Additional modules for metrics collection (optional)
# conductor.additional.modules=com.netflix.conductor.contribs.metrics.MetricsRegistryModule,com.netflix.conductor.contribs.metrics.LoggingMetricsModule
# com.netflix.conductor.contribs.metrics.LoggingMetricsModule.reportPeriodSeconds=15
# Load sample kitchen sink workflow
loadSample=false

View File

@ -1,59 +0,0 @@
version: '3.6'
services:
conductor-server:
environment:
- CONFIG_PROP=conductor-swarm-config.properties
image: nubisware/conductor-server
networks:
- conductor-network
ports:
- "8080:8080"
depends_on:
- elasticsearch
- dynomite1
- dynomite2
deploy:
mode: replicated
replicas: 2
#endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
configs:
- source: swarm-config
target: /app/config/conductor-swarm-config.properties
logging:
driver: "journald"
conductor-ui:
environment:
- WF_SERVER=http://conductor-server:8080/api/
image: nubisware/conductor-ui
networks:
- conductor-network
ports:
- "5000:5000"
deploy:
mode: replicated
replicas: 2
#endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
networks:
conductor-network:
configs:
swarm-config:
file: ./conductor-swarm-config.properties

View File

@ -1,101 +0,0 @@
version: '3.6'
services:
dynomite1:
environment:
- DYNO_NODE=dynomite1:8101:rack-1:d4s:0
image: nubisware/autodynomite:latest
networks:
conductor-network:
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
configs:
- source: seeds.list
target: /dynomite/seeds.list
dynomite2:
environment:
- DYNO_NODE=dynomite2:8101:rack-1:d4s:2147483647
image: nubisware/autodynomite:latest
networks:
conductor-network:
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
configs:
- source: seeds.list
target: /dynomite/seeds.list
dynomite3:
environment:
- DYNO_NODE=dynomite3:8101:rack-3:d4s:0
image: nubisware/autodynomite:latest
networks:
conductor-network:
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
configs:
- source: seeds.list
target: /dynomite/seeds.list
dynomite4:
environment:
- DYNO_NODE=dynomite4:8101:rack-2:d4s:2147483647
image: nubisware/autodynomite:latest
networks:
conductor-network:
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
configs:
- source: seeds.list
target: /dynomite/seeds.list
networks:
conductor-network:
configs:
seeds.list:
file: ./seeds.list

View File

@ -1,31 +0,0 @@
version: '3.6'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
environment:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- transport.host=0.0.0.0
- discovery.type=single-node
- xpack.security.enabled=false
networks:
conductor-network:
aliases:
- es
logging:
driver: "journald"
deploy:
mode: replicated
replicas: 1
#endpoint_mode: dnsrr
placement:
constraints: [node.role == worker]
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
networks:
conductor-network:

View File

@ -1,4 +0,0 @@
dynomite1:8101:rack-1:d4s:0
dynomite2:8101:rack-1:d4s:2147483647
dynomite3:8101:rack-2:d4s:0
dynomite4:8101:rack-2:d4s:2147483647