forked from gCubeSystem/conductor-setup
Compare commits
54 Commits
Author | SHA1 | Date |
---|---|---|
Marco Lettere | 112395d292 | |
Marco Lettere | e4585299d4 | |
Marco Lettere | b4477987b6 | |
Marco Lettere | afd8bd777c | |
Marco Lettere | 6ef1939bc4 | |
Marco Lettere | 14a2201085 | |
Marco Lettere | e01bffd1dc | |
Marco Lettere | 1cc1f1bb8c | |
Marco Lettere | 571a988be9 | |
Marco Lettere | 66a86dfe1a | |
Marco Lettere | 63806e6a6b | |
Marco Lettere | 1253174c74 | |
Marco Lettere | 11716f0d4d | |
Marco Lettere | 7c7535f94f | |
Marco Lettere | bfd86a8697 | |
Marco Lettere | 7948081d04 | |
Marco Lettere | 94eb5bd2fb | |
Marco Lettere | 2c54e97aeb | |
Marco Lettere | eb40300249 | |
Marco Lettere | a8b8f41446 | |
Marco Lettere | f3ec4f6327 | |
Marco Lettere | 3224c53ae5 | |
Marco Lettere | 139043faa2 | |
Marco Lettere | fafd89a278 | |
Marco Lettere | c4bb342b3f | |
Marco Lettere | c1db229a68 | |
Marco Lettere | 9cc76a61d5 | |
dcore94 | 981b8e1ac7 | |
dcore94 | 33499eb123 | |
dcore94 | e69fc35258 | |
dcore94 | b76b34c624 | |
dcore94 | 2f6d6e28ee | |
dcore94 | eeb843341a | |
dcore94 | d9467bf520 | |
dcore94 | 676cac24ec | |
dcore94 | 288482d5b6 | |
Mauro Mugnaini | 2d4585d086 | |
Mauro Mugnaini | ab66713941 | |
dcore94 | e12f87fd85 | |
dcore94 | bf1bf82c0f | |
dcore94 | ca0b62bcfe | |
dcore94 | c69b192c41 | |
dcore94 | 492c11ce61 | |
dcore94 | 2ec568e0a6 | |
dcore94 | d185681fef | |
dcore94 | 5a324e3265 | |
dcore94 | f434e0883e | |
dcore94 | b2b321a7de | |
dcore94 | dafb96637f | |
dcore94 | 48655dbbe3 | |
dcore94 | 414a38631c | |
dcore94 | 1887adf73b | |
dcore94 | af94edbda4 | |
Andrea Dell'Amico | 854f682bd3 |
README.md

@@ -1,50 +1,55 @@
# Conductor Setup

-**Conductor Setup** is composed of a set of ansible roles and a playbook named site.yaml useful for deploying a docker swarm running Conductor microservice orchestrator by [Netflix OSS](https://netflix.github.io/conductor/).
+**Conductor Setup** is composed of a set of Ansible roles and playbooks named site-*.yaml, useful for deploying a Docker swarm running the Conductor microservice orchestrator by [Netflix OSS](https://netflix.github.io/conductor/).

The current setup is based on Conductor 3.0.4, as adapted by Nubisware S.r.l.
It uses the following Docker images from Docker Hub:

- nubisware/conductor-server:3.0.4
- nubisware/conductor-ui-oauth2:3.0.4 (extended with OAuth2 login against Keycloak)

Besides the basic components, Conductor itself (server and UI) and Elasticsearch 6.8.15, the repository can be configured to launch postgres or mysql persistence, plus basic Python-based workers for running PyRest, PyMail, PyExec and PyShell in the same swarm.
In addition, an nginx-based PEP can be run to protect the Conductor REST API server.
It is also possible to connect to an external postgres for stateful deployments.

## Structure of the project

The AutoDynomite Docker image script file is present in the `dynomite` folder.
The Docker Compose Swarm files are present in the `stack` folder.
The `roles` folder contains the Ansible roles needed for the different configurations.
There are four site-*.yaml files for deploying to the local, nw-cluster, or D4Science dev, pre and prod environments.
To run a deployment:

`ansible-playbook site-X.yaml`

whereas

`ansible-playbook site-X.yaml -e dry=true`

only generates the files for the stack without actually deploying it.
The *local-site* folder contains a ready-to-use version for quickly launching a Conductor instance with no replication (except for the workers), no auth in the Conductor UI and no PEP.

`docker stack deploy -c elasticsearch-swarm.yaml -c postgres-swarm.yaml -c conductor-swarm.yaml -c conductor-workers-swarm.yaml -c pep-swarm.yaml conductor`

When you have ensured that Postgres and Elasticsearch are running, execute:

`docker stack deploy -c conductor-swarm.yaml conductor`

This will create a local stack accessible through a permissive PEP at port 80. Please add two mappings for localhost in your /etc/hosts:

`127.0.1.1 conductor-server conductor-ui`

and point your browser to http://conductor-ui.
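The hosts change above can be sketched as follows (the hostnames come from this README; the script works on a copy of /etc/hosts purely for illustration — edit the real file with sudo on your machine):

```shell
# Append the two hostname mappings unless they are already present.
# Working on a copy of /etc/hosts here; use the real file (with sudo) on your machine.
HOSTS_FILE=/tmp/hosts.conductor-test
cp /etc/hosts "$HOSTS_FILE"
LINE="127.0.1.1 conductor-server conductor-ui"
grep -qF "$LINE" "$HOSTS_FILE" || echo "$LINE" >> "$HOSTS_FILE"
grep conductor "$HOSTS_FILE"
```

The `grep -qF || echo` guard makes the snippet idempotent, so re-running it never duplicates the mapping.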
## Built With

* [Ansible](https://www.ansible.com)
* [Docker](https://www.docker.com)

## Documentation

The provided Docker stack files define the following configuration:

- 2 Conductor Server nodes with 2 replicas handled by Swarm
- 2 Conductor UI nodes with 2 replicas handled by Swarm
- 1 Elasticsearch node
- 1 Database node that can be postgres (default), mysql or mariadb
- 2 optional replicated instances of the PyExec worker running the tasks Http, Eval and Shell
- 1 optional cluster-replacement service that sets up a networking environment (including an HAProxy LB) similar to the one available in production. By default it is disabled.

The default configuration is run with the command: `ansible-playbook site.yaml`
Files for swarms and configurations will be generated inside a temporary folder named /tmp/conductor_stack on the local machine.
In order to change the destination folder use the switch: `-e target_path=anotherdir`
If you only want to review the generated files, run the command `ansible-playbook site.yaml -e dry=true`
In order to switch between postgres and mysql, set the db variable: `-e db=mysql`
In order to skip worker creation, set the noworker variable: `-e noworker=true`
In order to enable the cluster replacement, use the switch: `-e cluster_replacement=true`
If you run the stack in production behind a load-balanced setup, ensure the variable cluster_check is true: `ansible-playbook site.yaml -e cluster_check=true`
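The switches above can be combined in a single invocation. A hypothetical dry run exercising several of them at once (nothing is deployed; the target_path value is only an example):

```shell
# Build an ansible-playbook invocation from the switches documented above.
# dry=true means only the stack files are generated, nothing is deployed.
CMD="ansible-playbook site.yaml"
CMD="$CMD -e dry=true"                         # generate files only
CMD="$CMD -e db=mysql"                         # switch persistence from postgres to mysql
CMD="$CMD -e noworker=true"                    # skip the PyExec workers
CMD="$CMD -e target_path=/tmp/conductor_test"  # custom destination folder
echo "$CMD"
```

Running the echoed command then populates /tmp/conductor_test with the generated swarm files for review.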
Other settings can be fine-tuned by checking the variables in the relevant roles, which are:

- *common*: defaults and common tasks
- *conductor*: defaults, templates and tasks for generating swarm files for the replicated conductor-server and ui.
- *elasticsearch*: defaults, templates and tasks for starting a single instance of elasticsearch in the swarm
- *mysql*: defaults, templates and tasks for starting a single instance of mysql/mariadb in the swarm
- *postgres*: defaults, templates and tasks for starting a single instance of postgres in the swarm
- *workers*: defaults and tasks for starting a replicated instance of the workers for executing HTTP, Shell and Eval operations in the swarm.

## Examples

The following example runs, as user username on the remote hosts listed in hosts, a swarm with 2 replicas of conductor server and ui, 1 postgres, 1 elasticsearch, 2 replicas of the simple PyExec worker, and an HAProxy that acts as load balancer.

`ansible-playbook -u username -i hosts site.yaml -e target_path=/tmp/conductor -e cluster_replacement=true`

Check out the site-X.yaml files as a reference for different configurations.

## Change log
@@ -1,3 +0,0 @@
---
infrastructure: dev
conductor_workers_server: http://conductor-dev.int.d4science.net/api
@@ -1,3 +0,0 @@
---
infrastructure: pre
conductor_workers_server: https://conductor.pre.d4science.org/api
@@ -1,5 +1,5 @@
[dev_infra:children]
nw_cluster
dev_cluster

[nw_cluster]
nubis1.int.d4science.net

[dev_cluster]
docker-swarm1.int.d4science.net docker_swarm_manager_main_node=True
@@ -0,0 +1,5 @@
[nw_cluster_infra:children]
nw_cluster

[nw_cluster]
nubis1.int.d4science.net
@@ -0,0 +1,5 @@
[prod_infra:children]
prod_cluster

[prod_cluster]
docker-swarm1.int.d4science.net docker_swarm_manager_main_node=True
@@ -0,0 +1,13 @@
[common]
loglevel = info
#server =
threads = 1
pollrate = 1

[pymail]
server = smtp-relay.d4science.org
user = conductor_local
password =
protocol = starttls
port = 587
@@ -0,0 +1,23 @@
# Servers.
conductor.grpc-server.enabled=false

# Database persistence type.
conductor.db.type=postgres
conductor.postgres.jdbcUrl=jdbc:postgresql://postgresdb:5432/conductor
conductor.postgres.jdbcUsername=conductor
conductor.postgres.jdbcPassword=password

# Hikari pool sizes are -1 by default and prevent startup
conductor.postgres.connectionPoolMaxSize=10
conductor.postgres.connectionPoolMinIdle=2

# Elastic search instance indexing is enabled.
conductor.indexing.enabled=true
conductor.elasticsearch.url=http://elasticsearch:9200
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.index.name=conductor

# Load sample kitchen sink workflow
loadSample=false
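The README also offers mysql as an alternative persistence backend. A hypothetical MySQL counterpart of the persistence block above — key names taken from the Jinja swarm-config template later in this diff; host, port, database and credentials are illustrative only:

```properties
# Database persistence type (MySQL variant; values are examples).
conductor.db.type=mysql
conductor.mysql.jdbcUrl=jdbc:mysql://mysqldb:3306/conductor
conductor.mysql.jdbcUsername=conductor
conductor.mysql.jdbcPassword=password

# Hikari pool sizes are -1 by default and prevent startup
conductor.mysql.connectionPoolMaxSize=10
conductor.mysql.connectionPoolMinIdle=2
```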
@@ -0,0 +1,44 @@
version: '3.6'

services:
  conductor-server-local:
    environment:
      - CONFIG_PROP=conductor-swarm-config.properties
    image: "nubisware/conductor-server:3.0.4"
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        window: 120s
    configs:
      - source: swarm-config
        target: /app/config/conductor-swarm-config.properties
    logging:
      driver: "journald"

  conductor-ui-local:
    environment:
      - WF_SERVER=http://conductor-server-local:8080/api/
    image: "nubisware/conductor-ui-oauth2:3.0.4"
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        window: 120s

networks:
  conductor-network:

configs:
  swarm-config:
    file: ./conductor-swarm-config.properties
@@ -0,0 +1,30 @@
version: '3.6'

services:
  base:
    environment:
      CONDUCTOR_SERVER: http://conductor-server-local:8080/api/
      CONDUCTOR_HEALTH: http://conductor-server-local:8080/health
    configs:
      - source: base-config
        target: /app/config.cfg
    image: 'nubisware/nubisware-conductor-worker-py-base'
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
        window: 120s
    logging:
      driver: "journald"

networks:
  conductor-network:

configs:
  base-config:
    file: base-config.cfg
@@ -0,0 +1,28 @@
version: '3.6'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.host=0.0.0.0
      - discovery.type=single-node
      - xpack.security.enabled=false
    networks:
      conductor-network:
        aliases:
          - es
    logging:
      driver: "journald"
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s

networks:
  conductor-network:
@@ -0,0 +1,13 @@
load_module modules/ngx_http_js_module.so;

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
@@ -0,0 +1,41 @@
upstream _conductor-server {
    ip_hash;
    server conductor-server-local:8080;
}

upstream _conductor-ui {
    ip_hash;
    server conductor-ui-local:5000;
}

server {
    listen *:80;
    listen [::]:80;
    server_name conductor-server;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://_conductor-server;
    }
}

server {
    listen *:80 default_server;
    listen [::]:80 default_server;
    server_name conductor-ui;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://_conductor-ui;
    }
}
@@ -0,0 +1,30 @@
version: '3.6'

services:

  pep:
    image: nginx:stable-alpine
    networks:
      - conductor-network
    ports:
      - "80:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 10s
        window: 120s
    configs:
      - source: nginxconf
        target: /etc/nginx/templates/default.conf.template
      - source: nginxbaseconf
        target: /etc/nginx/nginx.conf

networks:
  conductor-network:

configs:
  nginxconf:
    file: ./nginx.default.conf
  nginxbaseconf:
    file: ./nginx.conf
@@ -0,0 +1,16 @@
version: '3.6'

services:

  postgresdb:
    image: postgres
    environment:
      POSTGRES_USER: "conductor"
      POSTGRES_PASSWORD: "password"
      POSTGRES_DB: "conductor"
    networks:
      - conductor-network
    deploy:
      replicas: 1

networks:
  conductor-network:
@@ -1,56 +0,0 @@
---
haproxy_latest_release: True
haproxy_version: 2.2
haproxy_repo_key: 'http://haproxy.debian.net/bernat.debian.org.gpg'
haproxy_debian_latest_repo: "deb http://haproxy.debian.net {{ ansible_lsb.codename }}-backports-{{ haproxy_version }} main"
haproxy_ubuntu_latest_repo: "ppa:vbernat/haproxy-{{ haproxy_version }}"
haproxy_pkg_state: present
haproxy_enabled: True
haproxy_loglevel: info
haproxy_k_bind_non_local_ip: True
haproxy_docker_container: False
haproxy_docker_version: '{{ haproxy_version }}.4'
haproxy_docker_image: 'haproxytech/haproxy-debian:{{ haproxy_version }}.4'
haproxy_docker_compose_dir: /srv/haproxy_swarm
haproxy_docker_restart_policy: 'on-failure'

haproxy_ha_with_keepalived: False
haproxy_docker_swarm_networks:
  - '{{ docker_swarm_portainer_network }}'
haproxy_docker_swarm_additional_networks: []

haproxy_docker_swarm_haproxy_constraints:
  - 'node.role == manager'
haproxy_docker_swarm_additional_services: [{ acl_name: 'conductor-server', acl_rule: 'hdr_dom(host) -i conductor-dev.int.d4science.net', stack_name: 'conductor-{{ infrastructure }}', service_name: 'conductor-server', service_replica_num: '2', service_port: '8080', service_overlay_network: 'conductor-network', stick_sessions: False, stick_on_cookie: True, stick_cookie: 'JSESSIONID', stick_table: 'type ip size 2m expire 180m', balance_type: 'roundrobin', backend_options: '', http_check_enabled: True, http_check: 'meth GET uri /api/health ver HTTP/1.1 hdr Host localhost', http_check_expect: 'rstatus (2|3)[0-9][0-9]' }, { acl_name: 'conductor-ui', acl_rule: 'hdr_dom(host) -i conductorui-dev.int.d4science.net', stack_name: 'conductor-{{ infrastructure }}', service_name: 'conductor-ui', service_replica_num: '2', service_port: '5000', service_overlay_network: 'conductor-network', stick_sessions: False, stick_on_cookie: True, stick_cookie: 'JSESSIONID', stick_table: 'type ip size 2m expire 180m', balance_type: 'roundrobin', backend_options: '', http_check_enabled: True, http_check: 'meth GET uri / ver HTTP/1.1 hdr Host localhost', http_check_expect: 'rstatus (2|3)[0-9][0-9]' }]
# - { acl_name: 'service', acl_rule: 'hdr_dom(host) -i service.example.com', stack_name: 'stack', service_name: 'service', service_replica_num: '1', service_port: '9999', service_overlay_network: 'service-network', stick_sessions: False, stick_on_cookie: True, stick_cookie: 'JSESSIONID', stick_table: 'type ip size 2m expire 180m', balance_type: 'roundrobin', backend_options: '', http_check_enabled: True, http_check: 'meth HEAD uri / ver HTTP/1.1 hdr Host localhost', http_check_expect: 'rstatus (2|3)[0-9][0-9]', allowed_networks: '192.168.1.0/24 192.168.2.0/24' }

haproxy_default_port: 80
haproxy_terminate_tls: False
haproxy_ssl_port: 443
haproxy_admin_port: 8880
haproxy_admin_socket: /run/haproxy/admin.sock

haproxy_install_additional_pkgs: False
haproxy_additional_pkgs:
  - haproxyctl
  - haproxy-log-analysis

haproxy_nagios_check: False
# It's a percentage
haproxy_nagios_check_w: 70
haproxy_nagios_check_c: 90

# Used by some other role as defaults, eg docker-swarm
haproxy_spread_checks: 5
haproxy_connect_timeout: 10s
haproxy_client_timeout: 120s
haproxy_server_timeout: 480s
haproxy_global_keepalive_timeout: 10s
haproxy_client_keepalive_timeout: 5184000s
haproxy_backend_maxconn: 2048
haproxy_check_interval: 3s
haproxy_check_timeout: 2s
haproxy_maxconns: 4096

haproxy_sysctl_conntrack_max: 131072
@@ -1,16 +0,0 @@
---
- name: Generate haproxy config
  template:
    src: templates/haproxy.cfg.j2
    dest: "{{ target_path }}/haproxy.cfg"

- name: Generate haproxy-docker-swarm
  template:
    src: templates/haproxy-docker-swarm.yaml.j2
    dest: "{{ target_path }}/haproxy-swarm.yaml"

- name: Create the overlay network that will be joined by the proxied services
  docker_network:
    name: '{{ haproxy_docker_overlay_network }}'
    driver: overlay
    scope: swarm
@@ -1,56 +0,0 @@
version: '3.6'

services:
  haproxy:
    image: {{ haproxy_docker_image }}
    configs:
      - source: haproxy-config
        target: /usr/local/etc/haproxy/haproxy.cfg
    networks:
      - {{ haproxy_docker_overlay_network }}
    volumes:
      #- /etc/haproxy:/usr/local/etc/haproxy:ro
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - target: {{ haproxy_default_port }}
        published: {{ haproxy_default_port }}
        protocol: tcp
        mode: host
      - target: {{ haproxy_ssl_port }}
        published: {{ haproxy_ssl_port }}
        protocol: tcp
        mode: host
      - target: {{ haproxy_admin_port }}
        published: {{ haproxy_admin_port }}
        protocol: tcp
        mode: host
    dns: [127.0.0.11]
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 20s
      placement:
        constraints:
          - "node.role==manager"
      restart_policy:
        condition: {{ haproxy_docker_restart_policy }}
        delay: 20s
        max_attempts: 5
        window: 120s
      resources:
        limits:
          cpus: '2.0'
          memory: 768M
        reservations:
          cpus: '1.0'
          memory: 384M
    logging:
      driver: 'journald'

configs:
  haproxy-config:
    file: ./haproxy.cfg

networks:
  {{ haproxy_docker_overlay_network }}:
    external: true
@@ -1,75 +0,0 @@
global
    log fd@2 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    stats socket /var/lib/haproxy/stats expose-fd listeners
    master-worker

resolvers docker
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s
    hold other 10s
    hold refused 10s
    hold nx 10s
    hold timeout 10s
    hold valid 10s
    hold obsolete 10s

defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    log global
    monitor-uri /_haproxy_health_check
    timeout http-keep-alive {{ haproxy_global_keepalive_timeout }}
    timeout connect {{ haproxy_connect_timeout }}
    timeout client {{ haproxy_client_timeout }}
    timeout server {{ haproxy_server_timeout }}
    timeout check {{ haproxy_check_timeout }}
    timeout http-request 10s # slowloris protection
    default-server inter 3s fall 2 rise 2 slowstart 60s

# Needed to preserve the stick tables
peers mypeers
    peer local_haproxy 127.0.0.1:1024

frontend http

    bind *:{{ haproxy_default_port }}

    mode http
    option http-keep-alive

{% for srv in haproxy_docker_swarm_additional_services %}
    use_backend {{ srv.acl_name }}_bck if { {{ srv.acl_rule }} }
{% endfor %}

#
# Backends
#

{% for srv in haproxy_docker_swarm_additional_services %}
backend {{ srv.acl_name }}_bck
    mode http
    option httpchk
    balance {{ srv.balance_type | default('roundrobin') }}
{% if srv.http_check_enabled is defined and srv.http_check_enabled %}
    http-check send {{ srv.http_check }}
    http-check expect {{ srv.http_check_expect }}
{% endif %}
{% if srv.stick_sessions %}
{% if srv.stick_on_cookie %}
    cookie {{ srv.stick_cookie }}
{% else %}
    stick on src
    stick-table {{ srv.stick_table }}
{% endif %}
{% endif %}
    server-template {{ srv.service_name }}- {{ srv.service_replica_num }} {{ srv.stack_name }}_{{ srv.service_name }}:{{ srv.service_port }} {{ srv.backend_options | default('') }} check resolvers docker init-addr libc,none
{% endfor %}
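For reference, feeding the conductor-server entry from the haproxy role defaults earlier in this diff through the backend loop above would render roughly as follows (a sketch, assuming the dev stack name and two replicas, with sticky sessions disabled):

```
backend conductor-server_bck
    mode http
    option httpchk
    balance roundrobin
    http-check send meth GET uri /api/health ver HTTP/1.1 hdr Host localhost
    http-check expect rstatus (2|3)[0-9][0-9]
    server-template conductor-server- 2 conductor-dev_conductor-server:8080  check resolvers docker init-addr libc,none
```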
@@ -1,2 +0,0 @@
---
haproxy_docker_overlay_network: 'haproxy-public'
@@ -1,5 +1,9 @@
---
-target_path: /tmp/conductor_stack
conductor_service: "conductor-server-{{ infrastructure }}"
conductor_ui_service: "conductor-ui-{{ infrastructure }}"
conductor_service_url: "http://{{ conductor_service }}:8080/api/"
conductor_service_health_url: "http://{{ conductor_service }}:8080/health"
+target_path: "/tmp/conductor_stack_{{ infrastructure }}"
conductor_network: conductor-network
conductor_db: postgres
init_db: True
@@ -0,0 +1,18 @@
$ANSIBLE_VAULT;1.1;AES256
62366130363930353837376531653565316531653234366233663032386266346338356335623537
3765393265633163396330646365393865386130393661650a666264363165656539396365643465
35313238313135363736386661633833333736396236303861383061313366613235623731356336
3634376335626138370a646666343033316165343665633338316432636562323736626466376233
64633738356663666563643465363137636261643639643035633931386631383436353936613334
64333135643036336539313164386264643737636164613462646130393730393334626335333262
30373231353061376565336366353938356338643432633664306632366436383262636333643961
62613562666463633164313235366433616134613831393436303466366236323337323635616337
34383634613736343034626330303661663662633661383734633834373464313137656461356562
37336430633865656330623863396133613636316136613133633965353932333266663532356334
35333138316339353236623963383739663730313737303838396538666338316366636537643663
35366537353736343462383734663762393433666266303963306136626631653539396632326337
39326266316532623232643437323238313765653261343630636339633936356138646262346634
63363763306533363839386364646130396534383437366631343537303165326539393639613735
39393364616361393435643531363462393633343437393936613861353266356230353338616163
37373562393362356563623966313034653138616632336264343533363165313362306330386639
32356363653031656465623463373337643930386361393839613139623530363635
@@ -1,5 +1,15 @@
---
-conductor_replicas: 2
+conductor_replicas: 1
conductor_ui_replicas: 1
conductor_image: nubisware/conductor-server:3.0.4
conductor_ui_image: nubisware/conductor-ui-oauth2:3.0.4
conductor_config: conductor-swarm-config.properties
conductor_config_template: "{{ conductor_config }}.j2"

conductor_ui_clientid: "conductor-ui"
conductor_ui_public_url: "http://conductor-ui"

#nw_cluster_conductor_ui_secret: in vault
#dev_conductor_ui_secret: in vault
#pre_conductor_ui_secret: in vault
#prod_conductor_ui_secret: in vault
@@ -4,23 +4,14 @@
    src: templates/conductor-swarm.yaml.j2
    dest: "{{ target_path }}/conductor-swarm.yaml"

-- name: Generate conductor config from dynomite seeds
-  when: conductor_db is defined and conductor_db == 'dynomite'
-  vars:
-    seeds: "{{ lookup('file', '{{ target_path}}/seeds.list').splitlines() }}"
+- name: Generate local auth config
+  when: conductor_auth is defined
  template:
-    src: "templates/{{ conductor_config_template }}"
-    dest: "{{ target_path }}/{{ conductor_config }}"
+    src: "templates/{{ conductor_auth }}_auth.cfg.j2"
+    dest: "{{ target_path }}/auth.cfg"

- name: Generate conductor config for JDBC DB
  when: conductor_db is not defined or conductor_db != 'dynomite'
  template:
    src: "templates/{{ conductor_config_template }}"
    dest: "{{ target_path }}/{{ conductor_config }}"

- name: Copy conductor SQL schema init for JDBC DB
  when: (conductor_db is not defined or conductor_db != 'dynomite') and init_db
  template:
    src: "templates/conductor-db-init-{{ conductor_db }}.sql.j2"
    dest: "{{ target_path }}/conductor-db-init.sql"
@@ -1,92 +1,36 @@
# Servers.
conductor.jetty.server.enabled=true
conductor.grpc.server.enabled=false
conductor.grpc-server.enabled=false

# Database persistence model. Possible values are memory, redis, and dynomite.
# If omitted, the persistence used is memory
#
# memory : The data is stored in memory and lost when the server dies. Useful for testing or demo
# redis : non-Dynomite based redis instance
# dynomite : Dynomite cluster. Use this for HA configuration.
# Database persistence type.
{% if conductor_db is not defined or conductor_db == 'postgres' %}
db=postgres
jdbc.url={{ postgres_jdbc_url }}
jdbc.username={{ postgres_jdbc_user }}
jdbc.password={{ postgres_jdbc_pass }}
conductor.{{ conductor_db }}.connection.pool.size.max=10
conductor.{{ conductor_db }}.connection.pool.idle.min=2
flyway.enabled=false

{% elif conductor_db is defined and conductor_db == 'mysql' %}
db=mysql
jdbc.url={{ mysql_jdbc_url }}
jdbc.username={{ mysql_jdbc_user }}
jdbc.password={{ mysql_jdbc_pass }}
conductor.{{ conductor_db }}.connection.pool.size.max=10
conductor.{{ conductor_db }}.connection.pool.idle.min=2
flyway.enabled=false

{% else %}
db=dynomite

# Dynomite Cluster details.
# format is host:port:rack separated by semicolon
workflow.dynomite.cluster.hosts={% set ns = namespace() %}
{% set ns.availability_zone = "" %}
{% for seed in seeds %}
{% set ns.seed_tokens = seed.split(':') %}
{% if ns.availability_zone == "" %}
{% set ns.availability_zone = ns.seed_tokens[2] %}
{% endif %}
{% if ns.availability_zone == ns.seed_tokens[2] %}
{{ ns.seed_tokens[0] }}:8102:{{ ns.availability_zone }}{%- if not loop.last %};{%- endif %}
{% endif %}
{%- endfor %}

# If you are running using dynomite, also add the following line to the property
# to set the rack/availability zone of the conductor server to be same as dynomite cluster config
EC2_AVAILABILTY_ZONE={{ ns.availability_zone }}

# Dynomite cluster name
workflow.dynomite.cluster.name=dyno1

# Namespace for the keys stored in Dynomite/Redis
workflow.namespace.prefix=conductor

# Namespace prefix for the dyno queues
workflow.namespace.queue.prefix=conductor_queues

# No. of threads allocated to dyno-queues (optional)
queues.dynomite.threads=3

# Non-quorum port used to connect to local redis. Used by dyno-queues.
# When using redis directly, set this to the same port as redis server
# For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
queues.dynomite.nonQuorum.port=22122
conductor.db.type=postgres
conductor.postgres.jdbcUrl={{ postgres_jdbc_url }}
conductor.postgres.jdbcUsername={{ postgres_jdbc_user }}
conductor.postgres.jdbcPassword={{ postgres_jdbc_pass }}
flyway.baseline-on-migrate=true
conductor.flyway.baseline-on-migrate=true
{% endif %}

# Elastic search instance type. Possible values are memory and external.
# If not specified, the instance type will be embedded in memory
#
# memory: The instance is created in memory and lost when the server dies. Useful for development and testing.
# external: Elastic search instance runs outside of the server. Data is persisted and does not get lost when
# the server dies. Useful for more stable environments like staging or production.
workflow.elasticsearch.instanceType=external
{% if conductor_db == 'mysql' %}
conductor.db.type=mysql
conductor.mysql.jdbcUrl={{ mysql_jdbc_url }}
conductor.mysql.jdbcUsername={{ mysql_jdbc_user }}
conductor.mysql.jdbcPassword={{ mysql_jdbc_pass }}
{% endif %}

# Transport address to elasticsearch
workflow.elasticsearch.url=elasticsearch:9300
# Hikari pool sizes are -1 by default and prevent startup
conductor.{{conductor_db}}.connectionPoolMaxSize=10
conductor.{{conductor_db}}.connectionPoolMinIdle=2

# Name of the elasticsearch cluster

# Elastic search instance indexing is enabled.
conductor.indexing.enabled=true
conductor.elasticsearch.url=http://elasticsearch:9200
workflow.elasticsearch.instanceType=EXTERNAL
workflow.elasticsearch.index.name=conductor

# Additional modules (optional)
# conductor.additional.modules=class_extending_com.google.inject.AbstractModule

# Additional modules for metrics collection (optional)
# conductor.additional.modules=com.netflix.conductor.contribs.metrics.MetricsRegistryModule,com.netflix.conductor.contribs.metrics.LoggingMetricsModule
# com.netflix.conductor.contribs.metrics.LoggingMetricsModule.reportPeriodSeconds=15

# Load sample kitchen sink workflow
loadSample=false

#flyway.baseline-on-migrate=true
@@ -3,31 +3,22 @@ version: '3.6'
{% set clustered = (cluster_replacement is defined and cluster_replacement) or (cluster_check is defined and cluster_check) %}

services:
-  conductor-server:
+  {{ conductor_service }}:
    environment:
      - CONFIG_PROP={{ conductor_config }}
-    image: nubisware/conductor-server
+    image: "{{ conductor_image }}"
    networks:
      - {{ conductor_network }}
{% if clustered %}
      - {{ haproxy_docker_overlay_network }}
{% endif %}
{% if not clustered %}
    ports:
      - "8080:8080"
{% endif %}
    deploy:
      mode: replicated
      replicas: {{ conductor_replicas }}
{% if clustered %}
      endpoint_mode: dnsrr
{% endif %}
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    configs:
      - source: swarm-config
@@ -36,40 +27,39 @@ services:
    logging:
      driver: "journald"

  conductor-ui:
  {{ conductor_ui_service }}:
    environment:
      - WF_SERVER=http://conductor-server:8080/api/
    image: nubisware/conductor-ui
      - WF_SERVER={{ conductor_service_url }}
{% if conductor_auth is defined %}
      - AUTH_CONFIG_PATH=/app/config/auth.config
{% endif %}
    image: "{{ conductor_ui_image }}"
    networks:
      - {{ conductor_network }}
{% if clustered %}
      - {{ haproxy_docker_overlay_network }}
{% endif %}
{% if not clustered %}
    ports:
      - "5000:5000"
{% if conductor_auth is defined %}
    configs:
      - source: auth-config
        target: /app/config/auth.config
{% endif %}
    deploy:
      mode: replicated
      replicas: {{ conductor_replicas }}
{% if clustered %}
      endpoint_mode: dnsrr
{% endif %}
      replicas: {{ conductor_ui_replicas }}
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s

networks:
  {{ conductor_network }}:
{% if clustered %}
  {{ haproxy_docker_overlay_network }}:
    external: True
{% endif %}

configs:
  swarm-config:
    file: ./{{ conductor_config }}
{% if conductor_auth is defined %}
  auth-config:
    file: ./auth.cfg
{% endif %}
@@ -0,0 +1,23 @@
{
  "strategy": "local",
  "strategySettings": {
    "users": {
      "admin": {
        "hash": "098039dd5e84e486f83eadefc31ce038ccc90d6d62323528181049371c9460b4",
        "salt": "salt",
        "displayName": "Admin",
        "email": "marco.lettere@nubisware.com",
        "roles": [ "admin", "viewer" ]
      }
    }
  },
  "audit": true,
  "acl": [
    "POST /(.*) admin",
    "PUT /(.*) admin",
    "DELETE /(.*) admin",
    "GET /api/(.*) viewer",
    "GET /(.*) *"
  ]
}
@@ -0,0 +1,24 @@
{
  "strategy": "oauth2",
  "strategySettings": {
    "authorizationURL": "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/auth",
    "tokenURL": "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/token",
    "clientID": "{{ conductor_ui_clientid }}",
    "clientSecret": "{{ conductor_ui_secret }}",
    "callbackURL": "{{ conductor_ui_public_url }}/login/callback",
    "logoutURL": "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/logout",
    "logoutCallbackURL": "{{ conductor_ui_public_url }}/logout/callback",
    "roles": [ "admin", "viewer" ]
  },
  "cookieSecret": "{{ conductor_ui_secret }}",
  "audit": true,
  "acl": [
    "POST /(.*) admin",
    "PUT /(.*) admin",
    "DELETE /(.*) admin",
    "GET /api/(.*) *",
    "GET /(.*) viewer"
  ]
}
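Each `acl` entry above pairs an HTTP method and a URI regex with the role allowed to use it. A hypothetical sketch of how such rules can be evaluated (`matchAcl` is an illustrative helper, not the conductor-ui matcher, whose semantics may differ):

```javascript
// Hypothetical evaluator for ACL rules of the form
// "<METHOD> <uri-regex> <role>", as in the auth configs above.
// "*" as role means any caller is allowed.
function matchAcl(acl, method, uri, roles) {
  return acl.some(rule => {
    const [m, pattern, role] = rule.split(' ');
    return m === method &&
           new RegExp('^' + pattern + '$').test(uri) &&
           (role === '*' || roles.includes(role));
  });
}

const acl = [
  'POST /(.*) admin',
  'GET /api/(.*) viewer',
  'GET /(.*) *'
];

console.log(matchAcl(acl, 'GET', '/api/metadata/workflow', ['viewer'])); // true
console.log(matchAcl(acl, 'POST', '/api/workflow', ['viewer']));         // false
```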
@@ -3,7 +3,7 @@ version: '3.6'
services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.host=0.0.0.0
@@ -18,9 +18,10 @@ services:
    deploy:
      mode: replicated
      replicas: {{ elasticsearch_replicas }}
      #endpoint_mode: dnsrr
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}
      restart_policy:
        condition: on-failure
        delay: 5s
@@ -9,22 +9,14 @@ services:
      MYSQL_PASSWORD: {{ mysql_jdbc_pass }}
      MYSQL_ROOT_PASSWORD: {{ mysql_jdbc_pass }}
      MYSQL_DB: {{ mysql_jdbc_db }}
{% if init_db %}
    configs:
      - source: db-init
        target: "/docker-entrypoint-initdb.d/db-init.sql"
{% endif %}
    networks:
      - {{ conductor_network }}
    deploy:
      replicas: {{ mysql_replicas }}
{% if infrastructure == 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}

networks:
  {{ conductor_network }}:
{% if init_db %}
configs:
  db-init:
    file: {{ target_path }}/conductor-db-init.sql
{% endif %}
@@ -4,28 +4,17 @@ services:

  {{ postgres_service_name }}:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: "{{ postgres_jdbc_user }}"
      POSTGRES_PASSWORD: "{{ postgres_jdbc_pass }}"
      POSTGRES_DB: "{{ postgres_jdbc_db }}"
{% if init_db %}
    configs:
      - source: db-init
        target: "/docker-entrypoint-initdb.d/db-init.sql"
{% endif %}
    networks:
      - {{ conductor_network }}
    deploy:
      replicas: {{ postgres_replicas }}
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}

networks:
  {{ conductor_network }}:
{% if init_db %}
configs:
  db-init:
    file: {{ target_path }}/conductor-db-init.sql
{% endif %}
@@ -3,7 +3,7 @@ version: '3.6'
services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.host=0.0.0.0
@@ -18,13 +18,13 @@ services:
    deploy:
      mode: replicated
      replicas: {{ elasticsearch_replicas }}
      #endpoint_mode: dnsrr
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role == worker]
{% endif %}
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s

networks:
@@ -0,0 +1,9 @@
---
use_jdbc: True
postgres_host: "postgresql-srv.d4science.org"
conductor_db: "postgres"
postgres_jdbc_user: "conductor_u"
postgres_jdbc_pass: '{{ jdbc_pass }}'
jdbc_db: "conductor"
postgres_jdbc_url: "jdbc:postgresql://{{ postgres_host }}:5432/{{ jdbc_db }}"
@@ -0,0 +1 @@
#jdbc_pass: "secret"
@@ -0,0 +1,7 @@
$ANSIBLE_VAULT;1.1;AES256
39303332663366633565666361663463353562636165643464313163633432373339633735656333
3331323435653762303366303238333835623762313133360a646466383563383832313662356239
32353134636636393433663638383639396365373736383135363133656161336165373864363566
3831666263643664320a386632666439306337383139613861353534313334303065303164616231
65343235343065383734643239666266626432393839656334383462336533383865646662646636
6333633164653139653337356135353264376363353532313536
@ -1,10 +0,0 @@
|
|||
---
|
||||
use_jdbc: True
|
||||
mysql_image_name: 'mariadb'
|
||||
mysql_service_name: 'mysqldb'
|
||||
mysql_replicas: 1
|
||||
conductor_db: mysql
|
||||
jdbc_user: conductor
|
||||
jdbc_pass: password
|
||||
jdbc_db: conductor
|
||||
jdbc_url: jdbc:mysql://{{ mysql_service_name }}:3306/{{ mysql_jdbc_db }}?useSSL=false&allowPublicKeyRetrieval=true
|
|
@ -1,5 +0,0 @@
|
|||
---
|
||||
- name: "Generate mysql swarm, image used: {{ mysql_image_name }}"
|
||||
template:
|
||||
src: templates/mysql-swarm.yaml.j2
|
||||
dest: "{{ target_path }}/mysql-swarm.yaml"
|
|
@ -1,30 +0,0 @@
|
|||
version: '3.6'
|
||||
|
||||
services:
|
||||
|
||||
{{ mysql_service_name }}:
|
||||
image: {{ mysql_image_name }}
|
||||
environment:
|
||||
MYSQL_USER: {{ mysql_jdbc_user }}
|
||||
MYSQL_PASSWORD: {{ mysql_jdbc_pass }}
|
||||
MYSQL_ROOT_PASSWORD: {{ mysql_jdbc_pass }}
|
||||
MYSQL_DB: {{ jdbc_db }}
|
||||
{% if init_db %}
|
||||
configs:
|
||||
- source: db-init
|
||||
target: "/docker-entrypoint-initdb.d/db-init.sql"
|
||||
{% endif %}
|
||||
networks:
|
||||
- {{ conductor_network }}
|
||||
deploy:
|
||||
replicas: {{ mysql_replicas }}
|
||||
placement:
|
||||
constraints: [node.role == worker]
|
||||
|
||||
networks:
|
||||
{{ conductor_network }}:
|
||||
{% if init_db %}
|
||||
configs:
|
||||
db-init:
|
||||
file: {{ target_path }}/conductor-db-init.sql
|
||||
{% endif %}
|
|
@@ -0,0 +1,4 @@
pep_port: 80
pep_replicas: 1
# hostnames to be used as vhosts
#pep_credentials: in vault
@@ -0,0 +1,24 @@
$ANSIBLE_VAULT;1.1;AES256
63653037396633613264356337303461626364643463616264616333313065336263626665646233
3861663135613138333863343261373464326239303835650a643535633265653339376332663462
35306231383136623339313436343732666332333435383162366135386663363063376466636233
6233353263663839310a623233353138373734356465653965376132643137643738363430333861
63336132646562343639666334616633356631366535343561646434323130633135393535383061
38313337303261396364653663316462376337393837373038623266633831303564646539326665
30303065363335346538643436613030336163336535383665623533303535623064376539363062
33393137376263383335363632633836626137346663613934346136306436353230663934633637
32356234386161393937303563343931373939623737636466363936393438353666326663373038
66343339353430393065346237626434356462653330313064303166366239343636636661633438
38613863386666343638663762303531326531633062343132663462333137373062646339623961
35666164313962356139623839323161303131306132633139303463393661636165353566373561
37333963386332386635616332326239386639636434376232356465366131306366376464366433
33323839326366653261636665623136336564373333313135313661633536333837353163373334
32366532373239303263386565363236383036623333353662303031373335653032646166386262
33656266356164666130343135386263346533393533386166306666366137313231386434343434
31653633303133323031343566663834636565313235323863353963363633346264636339653463
34353834343836306633346638313066316162373239326435313532643764306461663965303236
31386331303334636636623035303236303265633839323963633066633932336335326561623334
34366565393434393131656564646132343964653637393739613837313561646238646631316265
32303865633862386162393161336533313465326632363463653831623961633039393932623633
63613730663131343463316436326437393931343566373533666638366631333264353939343862
306362633430393061666539616565383366
@@ -0,0 +1,41 @@
---
- name: Generate PEP config
  template:
    src: templates/nginx.conf.j2
    dest: "{{ target_path }}/nginx.conf"

- name: Generate PEP default config
  when: pep is defined and pep == True
  template:
    src: templates/nginx.default.conf.j2
    dest: "{{ target_path }}/nginx.default.conf"

- name: Generate PEP default config (no PEP)
  when: pep is not defined or pep == False
  template:
    src: templates/nginx.default.conf.nopep.j2
    dest: "{{ target_path }}/nginx.default.conf"

- name: Generate config.js
  when: pep is defined and pep == True
  template:
    src: templates/config.js.j2
    dest: "{{ target_path }}/config.js"

- name: Generate pep.js
  when: pep is defined and pep == True
  template:
    src: templates/pep.js.j2
    dest: "{{ target_path }}/pep.js"

- name: Generate pep-docker-swarm
  template:
    src: templates/pep-swarm.yaml.j2
    dest: "{{ target_path }}/pep-swarm.yaml"

- name: Generate pep-docker-swarm when behind HA proxy
  when: ha_network is defined and ha_network == True
  template:
    src: templates/pep-swarm-ha_network.yaml.j2
    dest: "{{ target_path }}/pep-swarm.yaml"
@@ -0,0 +1,99 @@
export default { config };

var config = {
  "pep-credentials": "{{ pep_credentials }}",
  "hosts": [
    {
      "host": "{{ conductor_server_name }}",
      "audience": "conductor-server",
      "allow-basic-auth": true,
      "pip": [ { claim: "context", operator: "get-contexts" } ],
      "paths": [
        {
          "name": "metadata",
          "path": "^/api/metadata/(taskdefs|workflow)/?.*$",
          "methods": [
            { "method": "GET", "scopes": ["get", "list"] }
          ]
        },
        {
          "name": "metadata.taskdefs",
          "path": "^/api/metadata/taskdefs/?.*$",
          "methods": [
            { "method": "POST", "scopes": ["create"] },
            { "method": "DELETE", "scopes": ["delete"] },
            { "method": "PUT", "scopes": ["update"] }
          ]
        },
        {
          "name": "metadata.workflow",
          "path": "^/api/metadata/workflow/?.*$",
          "methods": [
            { "method": "POST", "scopes": ["create"] },
            { "method": "DELETE", "scopes": ["delete"] },
            { "method": "PUT", "scopes": ["update"] }
          ]
        },
        {
          "name": "workflow",
          "path": "^/api/workflow/?.*$",
          "methods": [
            { "method": "GET", "scopes": ["get"] },
            { "method": "POST", "scopes": ["start"] },
            { "method": "DELETE", "scopes": ["terminate"] }
          ]
        },
        {
          "name": "task",
          "path": "^/api/tasks/poll/.+$",
          "methods": [
            { "method": "GET", "scopes": ["poll"] }
          ]
        },
        {
          "name": "task",
          "path": "^/api/tasks[/]?$",
          "methods": [
            { "method": "POST", "scopes": ["update"] }
          ]
        }
      ]
    }
  ]
}
@@ -0,0 +1,18 @@
load_module modules/ngx_http_js_module.so;

worker_processes 1;

events {
    worker_connections 1024;
}

http {

{% if pep is defined and pep == True %}
    js_import pep.js;
    js_set $authorization pep.enforce;
    proxy_cache_path /var/cache/nginx/pep keys_zone=token_responses:1m max_size=2m;
{% endif %}
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
@@ -0,0 +1,109 @@
upstream _conductor-server {
    ip_hash;
    server {{ conductor_service }}:8080;
}

upstream _conductor-ui {
    ip_hash;
    server {{ conductor_ui_service }}:5000;
}

map $http_authorization $source_auth {
    default "";
}

js_var $auth_token;
js_var $pep_credentials;

server {

    listen *:80;
    listen [::]:80;
    server_name {{ conductor_server_name }};

{% if conductor_server_name != conductor_ui_server_name %}
    # When separate vhosts for the UI and the APIs are possible, as in the local-site deployment, also forward / to the Swagger docs
    location / {
        proxy_set_header Host $host;
        proxy_pass http://_conductor-server;
    }
{% endif %}

    location /health {
        proxy_set_header Host $host;
        proxy_pass http://_conductor-server;
    }

    location /api/ {
        js_content pep.enforce;
    }

    location @backend {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_pass http://_conductor-server;
    }

    location /jwt_verify_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_pass "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/token/introspect";

        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        gunzip on;

        proxy_cache token_responses;  # Enable caching
        proxy_cache_key $source_auth; # Cache for each source authentication
        proxy_cache_lock on;          # Duplicate tokens must wait
        proxy_cache_valid 200 10s;    # How long to use each response
    }

    location /jwt_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_pass "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/token";
        gunzip on;
    }

    location /permission_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Authorization "Bearer $auth_token";
        proxy_pass "{{ iam_host }}/auth/realms/d4science/protocol/openid-connect/token";
        gunzip on;
    }

}

server {

    listen *:80 default_server;
    listen [::]:80 default_server;
    server_name {{ conductor_ui_server_name }};

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://_conductor-ui;
    }

}
@@ -0,0 +1,41 @@
upstream _conductor-server {
    ip_hash;
    server {{ conductor_service }}:8080;
}

upstream _conductor-ui {
    ip_hash;
    server {{ conductor_ui_service }}:5000;
}

server {

    listen *:80;
    listen [::]:80;
    server_name conductor-server;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://_conductor-server;
    }

}

server {

    listen *:80 default_server;
    listen [::]:80 default_server;
    server_name conductor-ui;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://_conductor-ui;
    }

}
@@ -0,0 +1,46 @@
version: '3.6'

services:

  pep:
    image: nginx:stable-alpine
    networks:
      - {{ conductor_network }}
      - haproxy-public
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == worker]
      endpoint_mode: dnsrr
      restart_policy:
        condition: on-failure
        delay: 10s
        window: 120s
    configs:
      - source: nginxconf
        target: /etc/nginx/templates/default.conf.template
      - source: nginxbaseconf
        target: /etc/nginx/nginx.conf
{% if pep is defined and pep == True %}
      - source: pep
        target: /etc/nginx/pep.js
      - source: pepconfig
        target: /etc/nginx/config.js
{% endif %}

networks:
  {{ conductor_network }}:
  haproxy-public:
    external: true

configs:
  nginxconf:
    file: ./nginx.default.conf
  nginxbaseconf:
    file: ./nginx.conf
{% if pep is defined and pep == True %}
  pep:
    file: ./pep.js
  pepconfig:
    file: ./config.js
{% endif %}
@@ -0,0 +1,46 @@
version: '3.6'

services:

  pep:
    image: nginx:stable-alpine
    networks:
      - {{ conductor_network }}
    ports:
      - "{{ pep_port }}:80"
    deploy:
      replicas: {{ pep_replicas }}
{% if infrastructure != 'local' %}
      placement:
        constraints: [node.role != worker]
{% endif %}
      restart_policy:
        condition: on-failure
        delay: 10s
        window: 120s
    configs:
      - source: nginxconf
        target: /etc/nginx/templates/default.conf.template
      - source: nginxbaseconf
        target: /etc/nginx/nginx.conf
{% if pep is defined and pep == True %}
      - source: pep
        target: /etc/nginx/pep.js
      - source: pepconfig
        target: /etc/nginx/config.js
{% endif %}

networks:
  {{ conductor_network }}:

configs:
  nginxconf:
    file: ./nginx.default.conf
  nginxbaseconf:
    file: ./nginx.conf
{% if pep is defined and pep == True %}
  pep:
    file: ./pep.js
  pepconfig:
    file: ./config.js
{% endif %}
@@ -0,0 +1,325 @@
export default { enforce };

import defaultExport from './config.js';

function log(c, s){
  c.request.error(s)
}

function enforce(r) {

  var context = {
    request: r,
    config: defaultExport["config"],
    backend: (defaultExport.backend ? defaultExport.backend : "@backend"),
    export_backend_headers: (defaultExport.backendHeaders ? defaultExport.backendHeaders : wkf.export_backend_headers)
  }

  log(context, "Inside NJS enforce for " + r.method + " @ " + r.headersIn.host + "/" + r.uri)

  context = computeProtection(context)

  wkf.run(wkf.build(context), context)
}

// ######## WORKFLOW FUNCTIONS ###############
var wkf = {

  build: (context) => {
    var actions = [
      "export_pep_credentials",
      "parse_authentication",
      "check_authentication",
      "export_authn_token",
      "pip",
      "pdp",
      "export_backend_headers",
      "pass"
    ]
    return actions
  },

  run: (actions, context) => {
    context.request.error("Starting workflow with " + njs.dump(actions))
    var w = actions.reduce(
      (acc, f) => acc.then(typeof(f) === "function" ? f : wkf[f]),
      Promise.resolve().then(() => context)
    )
    w.catch(e => { context.request.error(njs.dump(e)); context.request.return(401) })
  },

  export_pep_credentials: exportPepCredentials,
  export_authn_token: exportAuthToken,
  export_backend_headers: c => c,
  parse_authentication: parseAuthentication,
  check_authentication: checkAuthentication,
  verify_token: verifyToken,
  request_token: requestToken,
  pip: pipExecutor,
  pdp: pdpExecutor,
  pass: pass,

  //PIP utilities
  "get-path-component": (c, i) => c.request.uri.split("/")[i],
  "get-token-field": getTokenField,
  "get-contexts": (c) => {
    var ra = c.authn.verified_token["resource_access"]
    if(ra){
      var out = [];
      for(var k in ra){
        if(ra[k].roles && ra[k].roles.length !== 0) out.push(k)
      }
    }
    return out;
  }
}

function getTokenField(context, f){
  return context.authn.verified_token[f]
}

function exportVariable(context, name, value){
  context.request.variables[name] = value
  log(context, "Exported variables:" + njs.dump(context.request.variables))
  return context
}

function exportPepCredentials(context){
  if(!context.config["pep-credentials"]){
    throw new Error("Need PEP credentials")
  }
  return exportVariable(context, "pep_credentials", "Basic " + context.config["pep-credentials"])
}

function exportAuthToken(context){
  return exportVariable(context, "auth_token", context.authn.token)
}

function checkAuthentication(context){
  return context.authn.type === "bearer" ? wkf.verify_token(context) : wkf.request_token(context)
}

function parseAuthentication(context){
  context.request.log("Inside parseAuthentication")
  var incomingauth = context.request.headersIn["Authorization"]

  if(!incomingauth) throw new Error("Authentication required");

  var arr = incomingauth.trim().replace(/\s\s+/g, " ").split(" ")
  if(arr.length != 2) throw new Error("Unknown authentication scheme");

  var type = arr[0].toLowerCase()
  if(type === "basic" && context.authz.host && context.authz.host["allow-basic-auth"]){
    var unamepass = Buffer.from(arr[1], 'base64').toString().split(":")
    if(unamepass.length != 2) return null;
    context.authn = { type: type, raw: arr[1], user: unamepass[0], password: unamepass[1] }
    return context
  }else if(type === "bearer"){
    context.authn = { type: type, raw: arr[1], token: arr[1] }
    return context
  }
  throw new Error("Unknown authentication scheme");
}

function verifyToken(context){
  log(context, "Inside verifyToken")
  var options = {
    "body": "token=" + context.authn.token + "&token_type_hint=access_token"
  }
  return context.request.subrequest("/jwt_verify_request", options)
    .then(reply => {
      if (reply.status === 200) {
        var response = null
        try{
          response = JSON.parse(reply.responseBody);
        } catch(error){
          throw new Error("Unable to parse response json from token request: " + reply.responseBody)
        }
        if (response.active === true) {
          return response
        } else {
          throw new Error("Unauthorized")
        }
      } else {
        throw new Error("Unauthorized")
      }
    }).then(verified_token => {
      context.authn.verified_token =
        JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
      return context
    })
}
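Both `verifyToken` and `requestToken` recover the token claims by base64url-decoding the middle segment of the JWT rather than reading them from the introspection body. That decode step, extracted as a standalone sketch (runnable under Node, which shares the `Buffer` API used by njs):

```javascript
// Decode a JWT payload the way verifyToken does: split on '.',
// base64url-decode the middle segment, parse it as JSON.
function decodeJwtPayload(token) {
  return JSON.parse(Buffer.from(token.split('.')[1], 'base64url').toString());
}

// Toy token: the header and signature segments are not inspected here.
const payload = Buffer.from(JSON.stringify({ sub: 'admin', active: true }))
  .toString('base64url');
const token = 'eyJhbGciOiJSUzI1NiJ9.' + payload + '.sig';
console.log(decodeJwtPayload(token)); // { sub: 'admin', active: true }
```

Note that this only extracts claims; it does not validate the signature, which is why the subrequest to the introspection endpoint happens first.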
function requestToken(context){
  log(context, "Inside requestToken")
  var options = {
    "body": "grant_type=client_credentials&client_id=" + context.authn.user + "&client_secret=" + context.authn.password
  }
  return context.request.subrequest("/jwt_request", options)
    .then(reply => {
      if (reply.status === 200) {
        var response = null
        try{
          response = JSON.parse(reply.responseBody);
        } catch(error){
          throw new Error("Unable to parse response json from token request: " + reply.responseBody)
        }
        context.authn.token = response.access_token
        context.authn.verified_token =
          JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
        return context
      } else if (reply.status === 400 || reply.status === 401){
        var options = {
          "body": "grant_type=password&username=" + context.authn.user + "&password=" + context.authn.password
        }
        return context.request.subrequest("/jwt_request", options)
          .then(reply => {
            if (reply.status === 200) {
              var response = JSON.parse(reply.responseBody);
              context.authn.token = response.access_token
              context.authn.verified_token =
                JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
              return context
            } else{
              throw new Error("Unauthorized " + reply.status)
            }
          })
      } else {
        throw new Error("Unauthorized " + reply.status)
      }
    })
}

function pipExecutor(context){
  log(context, "Inside extra claims PIP")
  context.authz.pip.forEach(extra => {
    //call extra claim pip function
    try{
      var operator = extra.operator
      var result = wkf[operator](context, extra.args)
      //ensure array and add to extra_claims
      if(!(result instanceof Array)) result = [result]
      if(!context.extra_claims) context.extra_claims = {};
      context.extra_claims[extra.claim] = result
    } catch (error){
      log(context, "Skipping invalid extra claim " + njs.dump(error))
    }
  })
  log(context, "Extra claims are " + njs.dump(context.extra_claims))
  return context
}

function pdpExecutor(context){
  log(context, "Inside PDP")
  return context.authz.pdp(context)
}

function umaCall(context){
  log(context, "Inside UMA call")
  var options = { "body": computePermissionRequestBody(context) };
  return context.request.subrequest("/permission_request", options)
    .then(reply => {
      if(reply.status === 200){
        return context
      }else{
        throw new Error("Response for authorization request is not ok " + reply.status + " " + njs.dump(reply.responseBody))
      }
    })
}

function pass(context){
  log(context, "Inside pass");
  if(typeof(context.backend) === "string") context.request.internalRedirect(context.backend);
  else if (typeof(context.backend) === "function") context.request.internalRedirect(context.backend(context))
  return context;
}

// ######## AUTHORIZATION PART ###############
function computePermissionRequestBody(context){

  if(!context.authz.host || !context.authz.path){
    throw new Error("Enforcement mode is always enforcing. Host or path not found...")
  }

  var audience = computeAudience(context)
  var grant = "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket"
  var mode = "response_mode=decision"
  var permissions = computePermissions(context)
  var extra = ""
  if(context.extra_claims){
    extra =
      "claim_token_format=urn:ietf:params:oauth:token-type:jwt&claim_token=" +
      JSON.stringify(context.extra_claims).toString("base64url")
  }
  var body = audience + "&" + grant + "&" + permissions + "&" + mode + "&" + extra
  context.request.error("Computed permission request body is " + body)
  return body
}

function computeAudience(context){
  var aud = context.request.headersIn.host
  if(context.authz.host){
    aud = context.authz.host.audience || context.authz.host.host
  }
  return "audience=" + aud
}

function computePermissions(context){
  var resource = context.request.uri
  if(context.authz.path){
    resource = context.authz.path.name || context.authz.path.path
  }
  var scopes = []
  if(context.authz.method && context.authz.method.scopes){
    scopes = context.authz.method.scopes
  }
  if(scopes.length > 0){
    return scopes.map(s => "permission=" + resource + "#" + s).join("&")
  }
  return "permission=" + resource
}
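`computePermissions` turns the matched path name and the method's scopes into the `permission=` parameters of the UMA ticket request. Its core mapping, extracted as a standalone sketch:

```javascript
// Mirror of computePermissions' output: one permission=<resource>#<scope>
// pair per scope, joined with '&'; just the resource when no scopes apply.
function permissionParams(resource, scopes) {
  if (scopes.length > 0) {
    return scopes.map(s => 'permission=' + resource + '#' + s).join('&');
  }
  return 'permission=' + resource;
}

// With the config.js above, GET /api/workflow/... resolves to the
// "workflow" path with scope "get":
console.log(permissionParams('workflow', ['get']));
// permission=workflow#get
console.log(permissionParams('metadata', ['get', 'list']));
// permission=metadata#get&permission=metadata#list
```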

function getPath(hostconfig, incomingpath, incomingmethod){
    var paths = hostconfig.paths || []
    var matchingpaths = paths
        .filter(p => {return incomingpath.match(p.path) != null})
        .reduce((acc, p) => {
            if (!p.methods || p.methods.length === 0) acc.weak.push({ path: p});
            else{
                var matchingmethods = p.methods.filter(m=>m.method.toUpperCase() === incomingmethod)
                if(matchingmethods.length > 0) acc.strong.push({ method : matchingmethods[0], path: p});
            }
            return acc;
        }, { strong: [], weak: []})
    return matchingpaths.strong.concat(matchingpaths.weak)[0]
}
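The ranking in `getPath` can be read as: path rules whose method list explicitly matches the request method ("strong") always win over rules with no method restriction ("weak"). An illustrative Python re-statement, with an assumed config shape (`path` regex, optional `methods` list):

```python
import re

def get_path(paths, incoming_path, incoming_method):
    # Partition matching path rules into method-specific (strong) and
    # method-agnostic (weak) matches, preferring strong ones.
    strong, weak = [], []
    for p in paths:
        if re.search(p["path"], incoming_path) is None:
            continue
        methods = p.get("methods") or []
        if not methods:
            weak.append({"path": p})
        else:
            hits = [m for m in methods if m["method"].upper() == incoming_method]
            if hits:
                strong.append({"method": hits[0], "path": p})
    candidates = strong + weak
    return candidates[0] if candidates else None

rule = get_path(
    [{"path": "/api", "methods": [{"method": "get"}]}, {"path": "/api"}],
    "/api/workflow", "GET")
print(rule["method"]["method"])  # get
```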

function getHost(config, host){
    var matching = config.hosts.filter(h=>{
        return h.host === host
    })
    return matching.length > 0 ? matching[0] : null
}

function computeProtection(context){
    log(context, "Getting by host " + context.request.headersIn.host)
    context.authz = {}
    context.authz.host = getHost(context.config, context.request.headersIn.host)
    if(context.authz.host !== null){
        context.authz.pip = context.authz.host.pip ? context.authz.host.pip : [];
        context.authz.pdp = context.authz.host.pdp ? context.authz.host.pdp : umaCall;
        var pathandmethod = getPath(context.authz.host, context.request.uri, context.request.method);
        if(pathandmethod){
            context.authz.path = pathandmethod.path;
            context.authz.pip = context.authz.path.pip ? context.authz.pip.concat(context.authz.path.pip) : context.authz.pip;
            context.authz.pdp = context.authz.path.pdp ? context.authz.path.pdp : context.authz.pdp;
            context.authz.method = pathandmethod.method;
            if(context.authz.method){
                context.authz.pip = context.authz.method.pip ? context.authz.pip.concat(context.authz.method.pip) : context.authz.pip;
                context.authz.pdp = context.authz.method.pdp ? context.authz.method.pdp : context.authz.pdp;
            }
        }
    }
    log(context, "Leaving protection computation: ")
    return context
}
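The override cascade in `computeProtection` accumulates PIP entries across host, path, and method level, while the PDP is replaced by the most specific level that defines one (defaulting to the UMA call). A compact sketch of that rule, with assumed dict keys:

```python
def cascade(host, path=None, method=None):
    # PIPs concatenate down the host -> path -> method chain;
    # the PDP is overridden by the most specific level that sets it.
    pip = list(host.get("pip", []))
    pdp = host.get("pdp", "uma")  # stand-in for the default umaCall PDP
    for level in (path, method):
        if not level:
            continue
        pip += level.get("pip", [])
        pdp = level.get("pdp", pdp)
    return pip, pdp

pip, pdp = cascade({"pip": ["h"]}, {"pip": ["p"]}, {"pdp": "custom"})
print(pip, pdp)  # ['h', 'p'] custom
```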

@ -1,9 +0,0 @@
---
use_jdbc: True
postgres_service_name: 'postgresdb'
postgres_replicas: 1
conductor_db: postgres
jdbc_user: conductor
jdbc_pass: password
jdbc_db: conductor
jdbc_url: jdbc:postgresql://{{ postgres_service_name }}:5432/{{ postgres_jdbc_db }}
@ -1,5 +0,0 @@
---
- name: Generate postgres swarm
  template:
    src: templates/postgres-swarm.yaml.j2
    dest: "{{ target_path }}/postgres-swarm.yaml"
@ -1,31 +0,0 @@
|
|||
version: '3.6'
|
||||
|
||||
services:
|
||||
|
||||
{{ postgres_service_name }}:
|
||||
image: postgres
|
||||
ports:
|
||||
- "5432:5432"
|
||||
environment:
|
||||
POSTGRES_USER: "{{ postgres_jdbc_user }}"
|
||||
POSTGRES_PASSWORD: "{{ postgres_jdbc_pass }}"
|
||||
POSTGRES_DB: "{{ postgres_jdbc_db }}"
|
||||
{% if init_db %}
|
||||
configs:
|
||||
- source: db-init
|
||||
target: "/docker-entrypoint-initdb.d/db-init.sql"
|
||||
{% endif %}
|
||||
networks:
|
||||
- {{ conductor_network }}
|
||||
deploy:
|
||||
replicas: {{ postgres_replicas }}
|
||||
placement:
|
||||
constraints: [node.role == worker]
|
||||
|
||||
networks:
|
||||
{{ conductor_network }}:
|
||||
{% if init_db %}
|
||||
configs:
|
||||
db-init:
|
||||
file: {{ target_path }}/conductor-db-init.sql
|
||||
{% endif %}
|
|
@ -1,6 +1,15 @@
---
conductor_workers_server: http://conductor-dev.int.d4science.net/api

conductor_workers: [ { service: 'base', image: 'nubisware/nubisware-conductor-worker-py-base', replicas: 2, threads: 1, pollrate: 1 }]

pymail_server: "smtp-relay.d4science.org"
pymail_user: "conductor_{{ infrastructure }}"
pymail_protocol: "starttls"
pymail_port: "587"

#smtp_local_pwd: ""
#smtp_dev_pwd: in vault
#smtp_pre_pwd: in vault
#smtp_prod_pwd: in vault

#{service: 'provisioning', image: 'nubisware/nubisware-conductor-worker-py-provisioning', replicas: 2, threads: 1, pollrate: 1 }
@ -0,0 +1,10 @@
$ANSIBLE_VAULT;1.1;AES256
62323839306636626530646263356365643863653430393837343037643461666230333037383239
6266363838393538643739393765656165613161396236330a323834623936373933643335306163
33323739663463326265613663363132383364336432646237666466663061393631623239306266
6363396363326364310a376362313934653933613939313463653865363538363935333866366164
36373062353631356632356230316535616666633265326136343061303962633163393264316431
31623730623764363763633939373963333333343731376466613437386264653461616263306530
63663032653030643239643830346631303766393136363337626635633664353635363161313562
63623733613039646465386434396238336637626632616566323734303362653633373936393532
3665
@ -4,7 +4,8 @@ services:
{% for workers in conductor_workers %}
  {{ workers.service }}:
    environment:
      CONDUCTOR_SERVER: {{ conductor_workers_server }}
      CONDUCTOR_SERVER: {{ conductor_service_url }}
      CONDUCTOR_HEALTH: {{ conductor_service_health_url }}
    configs:
      - source: {{workers.service}}-config
        target: /app/config.cfg
@ -14,12 +15,13 @@ services:
|
|||
deploy:
|
||||
mode: replicated
|
||||
replicas: {{ workers.replicas }}
|
||||
{% if infrastructure != 'local' %}
|
||||
placement:
|
||||
constraints: [node.role == worker]
|
||||
{% endif %}
|
||||
restart_policy:
|
||||
condition: on-failure
|
||||
delay: 5s
|
||||
max_attempts: 3
|
||||
window: 120s
|
||||
logging:
|
||||
driver: "journald"
|
||||
|
|
|
@ -1,9 +1,16 @@
[common]
loglevel = {{ item.get('loglevel', 'info') }}
#server =
threads = 3
pollrate = .1
threads = 1
pollrate = 1
{% if "domain" in item.keys() %}
domain={{ item.domain }}
{% endif %}

[pymail]
server = {{ pymail_server}}
user = {{ pymail_user }}
password = {{ pymail_password }}
protocol = {{ pymail_protocol }}
port = {{ pymail_port }}
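The worker config template above renders a plain INI file with `[common]` and `[pymail]` sections. A quick sketch of how a worker could read it with Python's standard `configparser` (the values below are illustrative stand-ins for the Jinja2 variables):

```python
import configparser

# Sample config.cfg as the template would render it, with placeholder values.
sample = """
[common]
loglevel = info
threads = 1
pollrate = 1

[pymail]
server = smtp-relay.example.org
user = conductor_dev
password = secret
protocol = starttls
port = 587
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg.get("common", "loglevel"))  # info
print(cfg.getint("pymail", "port"))   # 587
```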
85 run.sh
@ -1,85 +0,0 @@
#!/bin/bash
#
# The "directory/directory.yml" is the old way that we used to simplify jobs execution.
# The "directory/site.yml" is the syntax used by roles (from ansible version 1.2)
#
# Otherwise we can directly execute a single play (file)
#

PAR=50
TIMEOUT=15
PLAY=site.yml
HOSTS_DIR=.
ANSIBLE_HOSTS=

export TMPDIR=/var/tmp/${USER}
if [ ! -d ${TMPDIR} ] ; then
  mkdir -p ${TMPDIR}
fi

if [ -f ./ansible.cfg ] ; then
  export ANSIBLE_CONFIG="./ansible.cfg"
fi

# No cows!
export ANSIBLE_NOCOWS=1

export ANSIBLE_ERROR_ON_UNDEFINED_VARS=True
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_LIBRARY="/usr/share/ansible:./modules:../modules:$ANSIBLE_LIBRARY"

# Update the galaxy requirements
if [ -f requirements.yml ] ; then
  ansible-galaxy install --ignore-errors -f -r requirements.yml
fi

PLAY_OPTS="-T $TIMEOUT -f $PAR"

if [ -f "$1" ] ; then
  PLAY=$1
elif [ ! -f $PLAY ] ; then
  echo "No play file available."
  exit 1
fi

if [ -f "${PLAY}" ] ; then
  MAIN="${PLAY}"
  shift
elif [ -f "${PLAY}.yml" ]; then
  MAIN="${PLAY}.yml"
  shift
fi

if [ -f ${HOSTS_DIR}/hosts ] ; then
  ANSIBLE_HOSTS=${HOSTS_DIR}/hosts
fi
if [ -f ${HOSTS_DIR}/inventory/hosts ] ; then
  ANSIBLE_HOSTS=${HOSTS_DIR}/inventory/hosts
fi
if [ ! -z "$ANSIBLE_HOSTS" ] ; then
  PLAY_OPTS="-i $ANSIBLE_HOSTS"
fi

#echo "Find vault encrypted files if any"
if [ -d ./group_vars ] ; then
  VAULT_GROUP_FILES=$( find ./group_vars -name \*vault\* )
fi
if [ -d ./host_vars ] ; then
  VAULT_HOST_FILES=$( find ./host_vars -name \*vault\* )
fi

if [ -n "$VAULT_GROUP_FILES" ] || [ -n "$VAULT_HOST_FILES" ] ; then
  # Vault requires a password.
  # To encrypt a password for a user: python -c "from passlib.hash import sha512_crypt; print sha512_crypt.encrypt('<password>')"
  if [ -f ~/.conductor_ansible_vault_pass.txt ] ; then
    PLAY_OPTS="$PLAY_OPTS --vault-password-file=~/.conductor_ansible_vault_pass.txt"
  else
    echo "There are password protected encrypted files, we will ask for password before proceeding"
    PLAY_OPTS="$PLAY_OPTS --ask-vault-pass"
  fi
fi

# Main
ansible-playbook $PLAY_OPTS $MAIN $@

rm -f /tmp/passwordfile
@ -0,0 +1,67 @@
---
- hosts: dev_infra
#- hosts: localhost
  vars_files:
    - roles/workers/defaults/smtp.yaml
    - roles/pep/defaults/pep_credentials.yaml
    - roles/conductor/defaults/conductor_ui_secrets.yaml
  vars:
    infrastructure: "dev"
    pymail_password: "{{ smtp_dev_pwd }}"
    iam_host: https://accounts.dev.d4science.org
    pep: True
    pep_replicas: 2
    pep_credentials: "{{ dev_pep_credentials }}"
    ha_network: True
    conductor_ui_secret: "{{ dev_conductor_ui_secret }}"
    conductor_auth: oauth2
    conductor_server_name: conductor.dev.d4science.org
    conductor_ui_server_name: conductor-ui.dev.d4science.org
    conductor_ui_public_url: "https://{{ conductor_ui_server_name }}"
    conductor_replicas: 1
    conductor_ui_replicas: 2

  roles:
    - common
    - databases
    - conductor
    - workers
    - pep
  tasks:
    - name: Start {{ db|default('postgres', true) }} and es
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 20
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start pep
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/pep-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start workers
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when: dry is not defined or not dry|bool
@ -0,0 +1,52 @@
---
- hosts: localhost
  vars_files:
    - roles/workers/defaults/smtp.yaml
  vars:
    infrastructure: "local"
    pymail_password: "{{ smtp_local_pwd }}"
    smtp_local_pwd: ""
  roles:
    - common
    - databases
    - conductor
    - workers
    - pep
  tasks:
    - name: Start {{ db|default('postgres', true) }} and es
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 20
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start pep
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/pep-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start workers
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when: dry is not defined or not dry|bool
@ -0,0 +1,64 @@
---
- hosts: nw_cluster_infra
#- hosts: localhost
  vars_files:
    - roles/workers/defaults/smtp.yaml
    - roles/pep/defaults/pep_credentials.yaml
    - roles/conductor/defaults/conductor_ui_secrets.yaml
  vars:
    infrastructure: "nw-cluster"
    pymail_password: "{{ smtp_dev_pwd }}"
    iam_host: https://accounts.dev.d4science.org
    pep: True
    pep_credentials: "{{ nw_cluster_pep_credentials }}"
    conductor_ui_secret: "{{ nw_cluster_conductor_ui_secret }}"
    conductor_auth: oauth2
    conductor_server_name: conductor-dev.int.d4science.net
    conductor_ui_server_name: conductorui-dev.int.d4science.net
    conductor_ui_public_url: "http://{{ conductor_ui_server_name }}"
    conductor_replicas: 1
    conductor_ui_replicas: 2
  roles:
    - common
    - databases
    - conductor
    - workers
    - pep
  tasks:
    - name: Start {{ db|default('postgres', true) }} and es
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 20
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start pep
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/pep-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start workers
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when: dry is not defined or not dry|bool
@ -0,0 +1,65 @@
---
- hosts: pre_infra
  vars_files:
    - roles/workers/defaults/smtp.yaml
    - roles/pep/defaults/pep_credentials.yaml
    - roles/conductor/defaults/conductor_ui_secrets.yaml
  vars:
    infrastructure: "pre"
    pymail_password: "{{ smtp_pre_pwd }}"
    iam_host: https://accounts.pre.d4science.org
    pep: True
    pep_replicas: 2
    pep_credentials: "{{ pre_pep_credentials }}"
    ha_network: True
    conductor_ui_secret: "{{ pre_conductor_ui_secret }}"
    conductor_auth: oauth2
    conductor_server_name: conductor.pre.d4science.org
    conductor_ui_server_name: conductor-ui.pre.d4science.org
    conductor_ui_public_url: "https://{{ conductor_ui_server_name }}"
    conductor_replicas: 1
    conductor_ui_replicas: 2
  roles:
    - common
    - databases
    - conductor
    - workers
    - pep
  tasks:
    - name: Start {{ db|default('postgres', true) }} and es
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 20
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start pep
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/pep-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start workers
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when: dry is not defined or not dry|bool
@ -0,0 +1,65 @@
---
- hosts: prod_infra
  vars_files:
    - roles/external-postgres/defaults/vault_main.yaml
    - roles/workers/defaults/smtp.yaml
    - roles/pep/defaults/pep_credentials.yaml
    - roles/conductor/defaults/conductor_ui_secrets.yaml
  vars:
    infrastructure: "prod"
    pymail_password: "{{ smtp_prod_pwd }}"
    iam_host: https://accounts.d4science.org
    pep: True
    pep_credentials: "{{ prod_pep_credentials }}"
    ha_network: True
    conductor_ui_secret: "{{ prod_conductor_ui_secret }}"
    conductor_auth: oauth2
    conductor_server_name: conductor.d4science.org
    conductor_ui_server_name: conductor-ui.d4science.org
    conductor_ui_public_url: "https://{{ conductor_ui_server_name }}"
    conductor_replicas: 1
    conductor_ui_replicas: 2
  roles:
    - common
    - elasticsearch
    - external-postgres
    - conductor
    - workers
    - pep
  tasks:
    - name: Start es
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 5
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start pep
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/pep-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start workers
      docker_stack:
        name: "conductor-{{ infrastructure }}"
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when: dry is not defined or not dry|bool
56 site.yaml
@ -1,56 +0,0 @@
---
- hosts: pre_infra:dev_infra
  roles:
    - common
    - role: cluster-replacement
      when:
        - cluster_replacement is defined and cluster_replacement|bool
    - role: databases
    - conductor
    - role: workers
      when:
        - no_workers is not defined or not no_workers|bool
  tasks:
    - name: Start {{ db|default('postgres', true) }} and es
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/{{ db|default('postgres', true) }}-swarm.yaml"
          - "{{ target_path }}/elasticsearch-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Waiting for databases
      pause:
        seconds: 10
      when: dry is not defined or not dry|bool

    - name: Start conductor
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/conductor-swarm.yaml"
      when: dry is not defined or not dry|bool

    - name: Start haproxy
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/haproxy-swarm.yaml"
      when:
        - dry is not defined or not dry|bool
        - cluster_replacement is defined
        - cluster_replacement|bool

    - name: Start workers
      docker_stack:
        name: 'conductor-{{ infrastructure }}'
        state: present
        compose:
          - "{{ target_path }}/conductor-workers-swarm.yaml"
      when:
        - dry is not defined or not dry|bool
        - no_workers is not defined or not no_workers|bool
@ -0,0 +1,21 @@
FROM nginx:alpine

LABEL maintainer="Nubisware <info@nubisware.com>"

# Bake common configurations for Conductor PEP
COPY config/nginx/nginx.conf /etc/nginx/nginx.conf
COPY config/nginx/pep.js /etc/nginx/pep.js
COPY config/nginx/config.js /etc/nginx/config.js

# Ensure that cache is invalidated
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache

# Copy compiled UI assets to nginx www directory
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY build/ .

# Copy NGINX default configuration
COPY default.conf /etc/nginx/conf.d/default.conf
@ -0,0 +1,15 @@
FROM nubisware/conductor-frontend:common

LABEL maintainer="Nubisware <info@nubisware.com>"

# Ensure that cache is invalidated
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache

# Copy compiled UI assets to nginx www directory
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY build/ .

# Copy NGINX default configuration
COPY ./config.dev/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
@ -0,0 +1,15 @@
FROM nubisware/conductor-frontend:common

LABEL maintainer="Nubisware <info@nubisware.com>"

# Ensure that cache is invalidated
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache

# Copy compiled UI assets to nginx www directory
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY build/ .

# Copy NGINX default configuration
COPY ./config.pre/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
@ -0,0 +1,15 @@
FROM nubisware/conductor-frontend:common

LABEL maintainer="Nubisware <info@nubisware.com>"

# Ensure that cache is invalidated
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache

# Copy compiled UI assets to nginx www directory
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY build/ .

# Copy NGINX default configuration
COPY ./config.prod/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
@ -0,0 +1,38 @@
#

# ===========================================================================================================
# 0. Builder stage
# ===========================================================================================================
FROM eclipse-temurin:11-jdk-focal AS builder

LABEL maintainer="Nubisware SRL"

# Copy the project directly onto the image
COPY ./conductor-community /conductor
COPY build.gradle /conductor/community-server/
WORKDIR /conductor

# Build the server on run
RUN ./gradlew generateLock updateLock saveLock
RUN ./gradlew build -x test --stacktrace

# ===========================================================================================================
# 1. Bin stage
# ===========================================================================================================
FROM eclipse-temurin:11-jre-focal

LABEL maintainer="Nubisware SRL"

# Make app folders
RUN mkdir -p /app/config /app/logs /app/libs

# Copy the compiled output to new image
COPY --from=builder /conductor/community-server/build/libs/conductor-community-server-*-SNAPSHOT-boot.jar /app/libs/conductor-server.jar
COPY ./config.properties /app/config.properties
COPY startup.sh /app/
RUN chmod +x /app/startup.sh

HEALTHCHECK --interval=60s --timeout=30s --retries=10 CMD curl -I -XGET http://localhost:8080/health || exit 1

CMD [ "/app/startup.sh" ]
ENTRYPOINT [ "/bin/sh"]
@ -0,0 +1,24 @@
ln -s config.dev/config-pg-es7.properties config.properties
docker build -t nubisware/conductor-server:3.13.6-dev -f Dockerfile-server .
docker push nubisware/conductor-server:3.13.6-dev
unlink config.properties

# Override fetch plugin with one that uses d4s-boot secure fetch
#cp config/fetch.js conductor/ui/src/plugins/fetch.js

# Override root App with one instantiating d4s-boot configured for dev
#cp config.dev/App.jsx conductor/ui/src/App.jsx

# jump to ui code and build
#cd conductor/ui/
#yarn install && yarn build
#cd -

# copy the built app to local folder and build Docker image. Then clean up.
#cp -r conductor/ui/build .
#ln -s config.dev/nginx/conf.d/default.conf default.conf
#docker build -t nubisware/conductor-frontend:3.13.6-dev -f Dockerfile-frontend .
#rm -rf build
#unlink default.conf

#docker push nubisware/conductor-frontend:dev
@ -0,0 +1,12 @@
docker build -t nubisware/conductor-server3:pre -f Dockerfile-server-pre .
docker push nubisware/conductor-server3:pre

#docker build -t nubisware/conductor-frontend:common -f Dockerfile-frontend .

#cd /home/lettere/git/conductor/ui/
#./build-pre-code.sh
#cd -
#cp -r /home/lettere/git/conductor/ui/build .
#docker build -t nubisware/conductor-frontend:pre -f Dockerfile-frontend-pre .
#rm -rf build
#docker push nubisware/conductor-frontend:pre
@ -0,0 +1,4 @@
git clone https://github.com/Netflix/conductor
git clone https://github.com/Netflix/conductor-community

find conductor-community/ -name dependencies.lock -exec rm -v {} \;
@ -0,0 +1,12 @@
docker build -t nubisware/conductor-server3:prod -f Dockerfile-server-prod .
docker push nubisware/conductor-server3:prod

#docker build -t nubisware/conductor-frontend:common -f Dockerfile-frontend .

#cd /home/lettere/git/conductor/ui/
#./build-prod-code.sh
#cd -
#cp -r /home/lettere/git/conductor/ui/build .
#docker build -t nubisware/conductor-frontend:prod -f Dockerfile-frontend-prod .
#rm -rf build
#docker push nubisware/conductor-frontend:prod
@ -0,0 +1,72 @@
plugins {
    id 'org.springframework.boot'
}

dependencies {
    implementation "com.netflix.conductor:conductor-rest:${revConductor}"
    implementation "com.netflix.conductor:conductor-core:${revConductor}"
    implementation "com.netflix.conductor:conductor-redis-persistence:${revConductor}"
    implementation "com.netflix.conductor:conductor-cassandra-persistence:${revConductor}"

    implementation "com.netflix.conductor:conductor-grpc-server:${revConductor}"
    implementation "com.netflix.conductor:conductor-redis-lock:${revConductor}"
    implementation "com.netflix.conductor:conductor-redis-concurrency-limit:${revConductor}"

    implementation "com.netflix.conductor:conductor-http-task:${revConductor}"
    implementation "com.netflix.conductor:conductor-json-jq-task:${revConductor}"
    implementation "com.netflix.conductor:conductor-awss3-storage:${revConductor}"
    implementation "com.netflix.conductor:conductor-awssqs-event-queue:${revConductor}"

    implementation project(':event-queue:conductor-amqp')
    implementation project(':event-queue:conductor-nats')
    implementation project(':index:conductor-es7-persistence')
    implementation project(':external-payload-storage:conductor-azureblob-storage')
    implementation project(':external-payload-storage:conductor-postgres-external-storage')

    implementation project(':lock:conductor-zookeeper-lock')

    implementation project(':conductor-metrics')

    implementation project(':persistence:conductor-common-persistence')
    implementation project(':persistence:conductor-postgres-persistence')
    implementation project(':persistence:conductor-mysql-persistence')

    implementation project(':task:conductor-kafka')

    implementation project(':conductor-workflow-event-listener')

    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-validation'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.retry:spring-retry'

    implementation 'org.springframework.boot:spring-boot-starter-log4j2'
    implementation 'org.apache.logging.log4j:log4j-web'

    implementation 'org.springframework.boot:spring-boot-starter-actuator'

    implementation "org.springdoc:springdoc-openapi-ui:${revOpenapi}"

    runtimeOnly "org.glassfish.jaxb:jaxb-runtime:${revJAXB}"

    testImplementation "com.netflix.conductor:conductor-rest:${revConductor}"
    testImplementation "com.netflix.conductor:conductor-common:${revConductor}"
    testImplementation "io.grpc:grpc-testing:${revGrpc}"
    testImplementation "com.google.protobuf:protobuf-java:${revProtoBuf}"
    testImplementation "io.grpc:grpc-protobuf:${revGrpc}"
    testImplementation "io.grpc:grpc-stub:${revGrpc}"
}

jar {
    enabled = true
}

bootJar {
    mainClass = 'com.netflix.conductor.Conductor'
    classifier = 'boot'
}

springBoot {
    buildInfo()
}
@ -0,0 +1,13 @@
{
  "files": {
    "main.css": "/static/css/main.98e59355.css",
    "main.js": "/static/js/main.18fa60f5.js",
    "index.html": "/index.html",
    "main.98e59355.css.map": "/static/css/main.98e59355.css.map",
    "main.18fa60f5.js.map": "/static/js/main.18fa60f5.js.map"
  },
  "entrypoints": [
    "static/css/main.98e59355.css",
    "static/js/main.18fa60f5.js"
  ]
}
@ -0,0 +1,52 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
   viewBox="0 0 419.77176 434.76002"
   version="1.1"
   id="svg134"
   sodipodi:docname="favicon.svg"
   width="419.77176"
   height="434.76001"
   inkscape:version="1.1.2 (b8e25be8, 2022-02-05)"
   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
   xmlns="http://www.w3.org/2000/svg"
   xmlns:svg="http://www.w3.org/2000/svg">
  <sodipodi:namedview
     id="namedview136"
     pagecolor="#ffffff"
     bordercolor="#666666"
     borderopacity="1.0"
     inkscape:pageshadow="2"
     inkscape:pageopacity="0.0"
     inkscape:pagecheckerboard="0"
     showgrid="false"
     inkscape:zoom="1.2341567"
     inkscape:cx="208.64449"
     inkscape:cy="217.15233"
     inkscape:window-width="1296"
     inkscape:window-height="932"
     inkscape:window-x="2544"
     inkscape:window-y="454"
     inkscape:window-maximized="0"
     inkscape:current-layer="svg134" />
  <defs
     id="defs124">
    <style
       id="style122">.cls-1{fill:none;}.cls-2{fill:#1976d2;}</style>
  </defs>
  <rect
     class="cls-1"
     width="565"
     height="570.42999"
     id="rect126"
     x="-73.398232"
     y="-67.82" />
  <path
     class="cls-2"
     d="m 384.31177,242.99 -59.55,103.19 a 13.52,13.52 0 0 1 -11.67,6.73 h -19.95 l 63.46,-109.92 h -35.47 l -63.45,109.88 h -85.46 a 13.49,13.49 0 0 1 -11.62,-6.69 l -70.490004,-122 a 13.54,13.54 0 0 1 0,-13.48 l 70.450004,-122 a 13.51,13.51 0 0 1 11.67,-6.73 h 67.67 l -13.3,-23.16 a 54.43,54.43 0 0 0 -5.55,-7.6 h -48.83 a 44.3,44.3 0 0 0 -38.26,22.09 l -70.440004,122 a 44.29,44.29 0 0 0 0,44.18 l 70.430004,122 a 44.31,44.31 0 0 0 38.27,22.1 h 140.87 a 44.3,44.3 0 0 0 38.26,-22.09 l 68.42,-118.5 z"
     id="path128" />
  <path
     class="cls-2"
     d="m 218.88177,398.93 a 55.89,55.89 0 0 1 -23.16,5.12 h -33.54 a 56.31,56.31 0 0 1 -48.58,-28.07 L 38.211766,245.47 a 56.3,56.3 0 0 1 0,-56.14 L 113.60177,58.81 a 56.29,56.29 0 0 1 48.62,-28.07 h 33.54 a 56.28,56.28 0 0 1 48.62,28.07 l 76.79,133 h 35.43 l -63.46,-109.89 h 19.95 a 13.52,13.52 0 0 1 11.67,6.73 l 59.55,103.16 h 35.46 l -68.41,-118.5 a 44.31,44.31 0 0 0 -38.27,-22.13 h -37.68 l -4.47,-7.76 A 87.11,87.11 0 0 0 195.72177,0 h -33.54 A 87.1,87.1 0 0 0 86.971766,43.42 l -75.37,130.55 a 87.07,87.07 0 0 0 0,86.85 l 75.35,130.52 a 87.1,87.1 0 0 0 75.210004,43.42 h 33.54 a 87,87 0 0 0 70.12,-35.83 z"
     id="path130" />
</svg>
After Width: | Height: | Size: 2.4 KiB

@ -0,0 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><title>Conductor UI</title><script defer="defer" src="/static/js/main.18fa60f5.js"></script><link href="/static/css/main.98e59355.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
@ -0,0 +1 @@
<svg version="1.1" id="svg39" width="874.922" height="185.4" xmlns="http://www.w3.org/2000/svg"><defs id="defs11"><style id="style9">.cls-2{fill:#242a36}.cls-3{fill:#1976d2}</style></defs><g id="Layer_2" data-name="Layer 2" transform="translate(-73.828 -68)"><path id="rect13" style="fill:none" d="M0 0h1022.01v320.7H0z"/><path class="cls-2" d="M433.91 140.05q-14.78 0-25 9.56t-10.22 24.58q0 15 10.22 24.65 10.22 9.65 25 9.62a34.71 34.71 0 0 0 24.78-9.62q10.14-9.63 10.15-24.65.01-15.02-10.09-24.58-10.08-9.55-24.84-9.56zm15.15 50.29a21.72 21.72 0 0 1-30.51 0q-6.39-6.27-6.39-16.15 0-9.88 6.39-16.19a21.72 21.72 0 0 1 30.51 0q6.46 6.27 6.46 16.15 0 9.88-6.46 16.19z" id="path15"/><path class="cls-2" d="M515.55 139.66q-14.37 0-22.67 9.88v-8h-13.71v65.38h13.71v-30.23q0-12 5.21-18.18t14.3-6.2a14.28 14.28 0 0 1 11.27 4.88q4.29 4.88 4.28 12.91v36.78h13.84v-39.54q0-12.27-7.25-20t-18.98-7.68z" id="path17"/><path class="cls-2" d="M607 149q-9.09-9.34-23.72-9.35a31 31 0 0 0-22.8 9.75q-9.63 9.75-9.62 24.78.01 15.03 9.59 24.82a31 31 0 0 0 22.8 9.75q14.5 0 23.72-9.22v7.38h13.85V112H607Zm-6.45 41.39a21.06 21.06 0 0 1-15 6.2 20.48 20.48 0 0 1-15.16-6.13q-6.06-6.14-6.06-16.28t6-16.18a20.35 20.35 0 0 1 15.16-6.2 21 21 0 0 1 15 6.26q6.45 6.27 6.45 16.15 0 9.88-6.42 16.19z" id="path19"/><path class="cls-2" d="M682.64 171.68q0 12-5.14 18.2-5.14 6.2-14.23 6.19a14.35 14.35 0 0 1-11.4-4.94q-4.29-4.95-4.28-13V141.5h-13.71v39.41q0 12.39 7.18 20.1 7.18 7.71 19.05 7.71 14.24 0 22.53-9.88v8h13.71V141.5h-13.71z" id="path21"/><path class="cls-2" d="M860.21 140.05q-14.76 0-25 9.56T825 174.19q0 15 10.22 24.65 10.22 9.65 25 9.62a34.67 34.67 0 0 0 24.78-9.62q10.15-9.63 10.15-24.65 0-15.02-10.08-24.58-10.07-9.55-24.86-9.56zm15.16 50.29a21.72 21.72 0 0 1-30.51 0q-6.39-6.27-6.39-16.15 0-9.88 6.39-16.19a21.72 21.72 0 0 1 30.51 0q6.46 6.27 6.46 16.15 0 9.88-6.46 16.19z" id="path23"/><path class="cls-2" d="M944.1 140.71q-15.95 0-24.91 14.76v-14h-13.71v65.38h13.71v-23.04q0-13.83 6.33-21.68t18.48-7.84a36 36 0 0 1 3.82.13l.93-13.18 a 16.36 16.36 0 0 0-4.65-.53z" id="path25"/><path class="cls-2" d="M347.63 130.18a30.08 30.08 0 0 1 21 9.28 31.21 31.21 0 0 1 7.7 13.48h14.73a44.39 44.39 0 0 0-87.87 9.78q0 19.38 13.38 32.69a43 43 0 0 0 62.07 0 43.7 43.7 0 0 0 12.28-22.27h-14.75a31.37 31.37 0 0 1-7.52 12.86 28.48 28.48 0 0 1-42 0q-9.15-9.44-9.16-23.27-.01-13.83 9.16-23.26a30.08 30.08 0 0 1 20.98-9.29z" id="path27"/><path class="cls-2" d="M740.55 152.31a20.21 20.21 0 0 1 19.85 14.1h13.91a31.94 31.94 0 0 0-9.31-17 35.54 35.54 0 0 0-48.91 0Q706 159.16 706 174.19q0 15.03 10.09 24.81a35.7 35.7 0 0 0 48.91 0 31.68 31.68 0 0 0 9.23-16.67h-13.88a19.45 19.45 0 0 1-4.65 7.71 20.38 20.38 0 0 1-15.15 6q-9.24 0-15.16-6t-5.93-15.88q0-9.62 6-15.75a20.23 20.23 0 0 1 15.09-6.1z" id="path29"/><path class="cls-2" d="M808.24 195.81a10.32 10.32 0 0 1-7.91-3.1q-2.9-3.11-2.9-9v-30.87h22.41V141.5h-22.41v-20.3h-13.7v63.4q0 11.86 6.32 18 6.32 6.14 17.27 6.13 7.77 0 16.21-5l-4.22-11.46a19.91 19.91 0 0 1-11.07 3.54z" id="path31"/><path class="cls-3" d="m237.69 171.63-25.39 44a5.75 5.75 0 0 1-5 2.88h-8.51l27.06-46.86h-15.1l-27.06 46.86h-36.44a5.76 5.76 0 0 1-5-2.88l-30-52a5.74 5.74 0 0 1 0-5.75l30-52a5.75 5.75 0 0 1 5-2.87h28.86l-5.69-9.86a23 23 0 0 0-2.37-3.24h-20.8a18.88 18.88 0 0 0-16.31 9.42l-30 52a18.87 18.87 0 0 0 0 18.84l30 52a18.91 18.91 0 0 0 16.32 9.42h60.07a18.91 18.91 0 0 0 16.32-9.42l29.17-50.53z" id="path33"/><path class="cls-3" d="M167.15 238.13a23.94 23.94 0 0 1-9.88 2.18H143a24 24 0 0 1-20.73-12l-32.16-55.62a24 24 0 0 1 0-23.94l32.13-55.66a24 24 0 0 1 20.73-12h14.3a24 24 0 0 1 20.74 12l32.74 56.71h15.12L198.81 103h8.51a5.76 5.76 0 0 1 5 2.87l25.39 44h15.12l-29.19-50.6a18.91 18.91 0 0 0-16.32-9.42h-16.07l-1.9-3.31A37.15 37.15 0 0 0 157.27 68H143a37.12 37.12 0 0 0-32.1 18.54L78.77 142.2a37.1 37.1 0 0 0 0 37l32.13 55.66A37.12 37.12 0 0 0 143 253.4h14.3a37.11 37.11 0 0 0 29.9-15.27z" id="path35"/></g></svg>

After Width: | Height: | Size: 3.8 KiB
@ -0,0 +1,3 @@
# https://www.robotstxt.org/robotstxt.html
User-agent: *
Disallow:
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@ -0,0 +1,151 @@
/*
object-assign
(c) Sindre Sorhus
@license MIT
*/

/*! *****************************************************************************
Copyright (c) Microsoft Corporation.

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
***************************************************************************** */

/*! Hammer.JS - v2.0.17-rc - 2019-12-16
 * http://naver.github.io/egjs
 *
 * Forked By Naver egjs
 * Copyright (c) hammerjs
 * Licensed under the MIT license */

/*! regenerator-runtime -- Copyright (c) 2014-present, Facebook, Inc. -- license (MIT): https://github.com/facebook/regenerator/blob/main/LICENSE */

/**
 * @license
 * Copyright (c) 2012-2013 Chris Pettitt
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @license
 * Lodash <https://lodash.com/>
 * Copyright OpenJS Foundation and other contributors <https://openjsf.org/>
 * Released under MIT license <https://lodash.com/license>
 * Based on Underscore.js 1.8.3 <http://underscorejs.org/LICENSE>
 * Copyright Jeremy Ashkenas, DocumentCloud and Investigative Reporters & Editors
 */

/**
 * A better abstraction over CSS.
 *
 * @copyright Oleg Isonen (Slobodskoi) / Isonen 2014-present
 * @website https://github.com/cssinjs/jss
 * @license MIT
 */

/**
 * vis-timeline and vis-graph2d
 * https://visjs.github.io/vis-timeline/
 *
 * Create a fully customizable, interactive timeline with items and ranges.
 *
 * @version 7.7.0
 * @date 2022-07-10T21:34:08.601Z
 *
 * @copyright (c) 2011-2017 Almende B.V, http://almende.com
 * @copyright (c) 2017-2019 visjs contributors, https://github.com/visjs
 *
 * @license
 * vis.js is dual licensed under both
 *
 * 1. The Apache 2.0 License
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * and
 *
 * 2. The MIT License
 * http://opensource.org/licenses/MIT
 *
 * vis.js may be distributed under either license.
 */

/** @license React v0.19.1
 * scheduler.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.13.1
 * react-is.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.14.0
 * react-dom.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.14.0
 * react-jsx-runtime.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.14.0
 * react.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v17.0.2
 * react-is.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

//! moment.js
File diff suppressed because one or more lines are too long
@ -0,0 +1,183 @@
import React, { Component } from "react";

import { Route, Switch } from "react-router-dom";
import { makeStyles } from "@material-ui/styles";
import { Button, AppBar, Toolbar } from "@material-ui/core";
import AppLogo from "./plugins/AppLogo";
import NavLink from "./components/NavLink";

import WorkflowSearch from "./pages/executions/WorkflowSearch";
import TaskSearch from "./pages/executions/TaskSearch";

import Execution from "./pages/execution/Execution";
import WorkflowDefinitions from "./pages/definitions/Workflow";
import WorkflowDefinition from "./pages/definition/WorkflowDefinition";
import TaskDefinitions from "./pages/definitions/Task";
import TaskDefinition from "./pages/definition/TaskDefinition";
import EventHandlerDefinitions from "./pages/definitions/EventHandler";
import EventHandlerDefinition from "./pages/definition/EventHandler";
import TaskQueue from "./pages/misc/TaskQueue";
import KitchenSink from "./pages/kitchensink/KitchenSink";
import DiagramTest from "./pages/kitchensink/DiagramTest";
import Examples from "./pages/kitchensink/Examples";
import Gantt from "./pages/kitchensink/Gantt";

import CustomRoutes from "./plugins/CustomRoutes";
import AppBarModules from "./plugins/AppBarModules";
import CustomAppBarButtons from "./plugins/CustomAppBarButtons";
import Workbench from "./pages/workbench/Workbench";

import { Helmet } from "react-helmet";

const useStyles = makeStyles((theme) => ({
  root: {
    backgroundColor: "#efefef",
    display: "flex",
  },
  body: {
    width: "100vw",
    height: "100vh",
    paddingTop: theme.overrides.MuiAppBar.root.height,
  },
  toolbarRight: {
    marginLeft: "auto",
    display: "flex",
    flexDirection: "row",
  },
  toolbarRegular: {
    minHeight: 80,
  },
}));

class AppAuth extends Component {
  render() {
    return (
      <div>
        <Helmet>
          <script src="https://cdn.dev.d4science.org/boot/d4s-boot.js"></script>
        </Helmet>
        <d4s-boot-2 url="https://accounts.dev.d4science.org/auth" redirect-url="http://localhost/login/callback" gateway="conductor-ui">
        </d4s-boot-2>
      </div>
    )
  }
}

class AppBody extends Component {
  constructor(props) {
    super(props)
    this.state = { open: false }
  }

  setOpen(v) {
    this.setState({ open: v })
  }

  componentDidMount() {
    document.addEventListener("authenticated", ev => {
      this.setOpen(true)
    })
  }

  render() {
    const classes = this.props.classes;
    return !this.state.open ? <div></div> : (
      <div className={classes.root}>
        <AppBar position="fixed">
          <Toolbar
            classes={{
              regular: classes.toolbarRegular,
            }}
          >
            <AppLogo />
            <Button component={NavLink} path="/">
              Executions
            </Button>
            <Button component={NavLink} path="/workflowDefs">
              Definitions
            </Button>
            <Button component={NavLink} path="/taskQueue">
              Task Queues
            </Button>
            <Button component={NavLink} path="/workbench">
              Workbench
            </Button>
            <CustomAppBarButtons />

            <div className={classes.toolbarRight}>
              <AppBarModules />
            </div>
          </Toolbar>
        </AppBar>
        <div className={classes.body}>
          <Switch>
            <Route exact path="/">
              <WorkflowSearch />
            </Route>
            <Route exact path="/search/by-tasks">
              <TaskSearch />
            </Route>
            <Route path="/execution/:id/:taskId?">
              <Execution />
            </Route>
            <Route exact path="/workflowDefs">
              <WorkflowDefinitions />
            </Route>
            <Route exact path="/workflowDef/:name?/:version?">
              <WorkflowDefinition />
            </Route>
            <Route exact path="/taskDefs">
              <TaskDefinitions />
            </Route>
            <Route exact path="/taskDef/:name?">
              <TaskDefinition />
            </Route>
            <Route exact path="/eventHandlerDef">
              <EventHandlerDefinitions />
            </Route>
            <Route exact path="/eventHandlerDef/:name">
              <EventHandlerDefinition />
            </Route>
            <Route exact path="/taskQueue/:name?">
              <TaskQueue />
            </Route>
            <Route exact path="/workbench">
              <Workbench />
            </Route>
            <Route exact path="/kitchen">
              <KitchenSink />
            </Route>
            <Route exact path="/kitchen/diagram">
              <DiagramTest />
            </Route>
            <Route exact path="/kitchen/examples">
              <Examples />
            </Route>
            <Route exact path="/kitchen/gantt">
              <Gantt />
            </Route>
            <CustomRoutes />
          </Switch>
        </div>
      </div>
    )
  }
}

class AppContent extends Component {
  render() {
    return (
      <div>
        <AppAuth />
        <AppBody classes={this.props.classes} />
      </div>
    )
  }
}

// Keep functional constructor to avoid problems with useStyles
export default function App() {
  const classes = useStyles();

  return <AppContent classes={classes} />
}
@ -0,0 +1,11 @@
[common]
loglevel = info
threads = 1
pollrate = 1

[pymail]
server = smtp-relay.d4science.org
user = conductor_dev
password =
protocol = starttls
port = 587
@ -0,0 +1,26 @@
# Database persistence type.
conductor.db.type=postgres

spring.datasource.url=jdbc:postgresql://postgres:5432/conductor
spring.datasource.username=conductor
spring.datasource.password=conductor

# Hikari pool sizes are -1 by default and prevent startup
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2

# Elasticsearch indexing is enabled.
conductor.indexing.enabled=true
conductor.elasticsearch.version=7
conductor.elasticsearch.url=http://es:9200
conductor.elasticsearch.clusterHealthColor=yellow

# Enable Prometheus
conductor.metrics-prometheus.enabled=true
management.endpoints.web.exposure.include=prometheus,health,info,metrics

# GRPC disabled
conductor.grpc-server.enabled=false

# Load sample kitchen sink disabled
loadSample=false
@ -0,0 +1,87 @@
upstream conductor_server {
    ip_hash;
    server conductor-server:8080;
}

map $http_authorization $source_auth {
    default "";
}

js_var $auth_token;
js_var $pep_credentials;

server {
    listen 80;
    server_name conductor conductor.dev.d4science.org;

    location / {
        # Directory where the React app's static files are stored
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    location /health {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /actuator/prometheus {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /api/ {
        js_content pep.enforce;
    }

    location @backend {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_pass http://conductor_server;
    }

    location /jwt_verify_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_pass "https://accounts.dev.d4science.org/auth/realms/d4science/protocol/openid-connect/token/introspect";

        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        gunzip on;

        proxy_cache token_responses;   # Enable caching
        proxy_cache_key $source_auth;  # Cache for each source authentication
        proxy_cache_lock on;           # Duplicate tokens must wait
        proxy_cache_valid 200 10s;     # How long to use each response
    }

    location /jwt_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_pass "https://accounts.dev.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
        gunzip on;
    }

    location /permission_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Authorization "Bearer $auth_token";
        proxy_pass "https://accounts.dev.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
        gunzip on;
    }

}
@ -0,0 +1,183 @@
import React, { Component } from "react";

import { Route, Switch } from "react-router-dom";
import { makeStyles } from "@material-ui/styles";
import { Button, AppBar, Toolbar } from "@material-ui/core";
import AppLogo from "./plugins/AppLogo";
import NavLink from "./components/NavLink";

import WorkflowSearch from "./pages/executions/WorkflowSearch";
import TaskSearch from "./pages/executions/TaskSearch";

import Execution from "./pages/execution/Execution";
import WorkflowDefinitions from "./pages/definitions/Workflow";
import WorkflowDefinition from "./pages/definition/WorkflowDefinition";
import TaskDefinitions from "./pages/definitions/Task";
import TaskDefinition from "./pages/definition/TaskDefinition";
import EventHandlerDefinitions from "./pages/definitions/EventHandler";
import EventHandlerDefinition from "./pages/definition/EventHandler";
import TaskQueue from "./pages/misc/TaskQueue";
import KitchenSink from "./pages/kitchensink/KitchenSink";
import DiagramTest from "./pages/kitchensink/DiagramTest";
import Examples from "./pages/kitchensink/Examples";
import Gantt from "./pages/kitchensink/Gantt";

import CustomRoutes from "./plugins/CustomRoutes";
import AppBarModules from "./plugins/AppBarModules";
import CustomAppBarButtons from "./plugins/CustomAppBarButtons";
import Workbench from "./pages/workbench/Workbench";

import { Helmet } from "react-helmet";

const useStyles = makeStyles((theme) => ({
  root: {
    backgroundColor: "#efefef",
    display: "flex",
  },
  body: {
    width: "100vw",
    height: "100vh",
    paddingTop: theme.overrides.MuiAppBar.root.height,
  },
  toolbarRight: {
    marginLeft: "auto",
    display: "flex",
    flexDirection: "row",
  },
  toolbarRegular: {
    minHeight: 80,
  },
}));

class AppAuth extends Component {
  render() {
    return (
      <div>
        <Helmet>
          <script src="https://cdn.pre.d4science.org/boot/d4s-boot.js"></script>
        </Helmet>
        <d4s-boot-2 url="https://accounts.pre.d4science.org/auth" redirect-url="http://localhost/login/callback" gateway="conductor-ui">
        </d4s-boot-2>
      </div>
    )
  }
}

class AppBody extends Component {
  constructor(props) {
    super(props)
    this.state = { open: false }
  }

  setOpen(v) {
    this.setState({ open: v })
  }

  componentDidMount() {
    document.addEventListener("authenticated", ev => {
      this.setOpen(true)
    })
  }

  render() {
    const classes = this.props.classes;
    return !this.state.open ? <div></div> : (
      <div className={classes.root}>
        <AppBar position="fixed">
          <Toolbar
            classes={{
              regular: classes.toolbarRegular,
            }}
          >
            <AppLogo />
            <Button component={NavLink} path="/">
              Executions
            </Button>
            <Button component={NavLink} path="/workflowDefs">
              Definitions
            </Button>
            <Button component={NavLink} path="/taskQueue">
              Task Queues
            </Button>
            <Button component={NavLink} path="/workbench">
              Workbench
            </Button>
            <CustomAppBarButtons />

            <div className={classes.toolbarRight}>
              <AppBarModules />
            </div>
          </Toolbar>
        </AppBar>
        <div className={classes.body}>
          <Switch>
            <Route exact path="/">
              <WorkflowSearch />
            </Route>
            <Route exact path="/search/by-tasks">
              <TaskSearch />
            </Route>
            <Route path="/execution/:id/:taskId?">
              <Execution />
            </Route>
            <Route exact path="/workflowDefs">
              <WorkflowDefinitions />
            </Route>
            <Route exact path="/workflowDef/:name?/:version?">
              <WorkflowDefinition />
            </Route>
            <Route exact path="/taskDefs">
              <TaskDefinitions />
            </Route>
            <Route exact path="/taskDef/:name?">
              <TaskDefinition />
            </Route>
            <Route exact path="/eventHandlerDef">
              <EventHandlerDefinitions />
            </Route>
            <Route exact path="/eventHandlerDef/:name">
              <EventHandlerDefinition />
            </Route>
            <Route exact path="/taskQueue/:name?">
              <TaskQueue />
            </Route>
            <Route exact path="/workbench">
              <Workbench />
            </Route>
            <Route exact path="/kitchen">
              <KitchenSink />
            </Route>
            <Route exact path="/kitchen/diagram">
              <DiagramTest />
            </Route>
            <Route exact path="/kitchen/examples">
              <Examples />
            </Route>
            <Route exact path="/kitchen/gantt">
              <Gantt />
            </Route>
            <CustomRoutes />
          </Switch>
        </div>
      </div>
    )
  }
}

class AppContent extends Component {
  render() {
    return (
      <div>
        <AppAuth />
        <AppBody classes={this.props.classes} />
      </div>
    )
  }
}

// Keep functional constructor to avoid problems with useStyles
export default function App() {
  const classes = useStyles();

  return <AppContent classes={classes} />
}
@ -0,0 +1,11 @@
[common]
loglevel = info
threads = 1
pollrate = 1

[pymail]
server = smtp-relay.d4science.org
user = conductor_pre
password =
protocol = starttls
port = 587
@ -0,0 +1,26 @@
# Database persistence type.
conductor.db.type=postgres

spring.datasource.url=jdbc:postgresql://postgres:5432/conductor
spring.datasource.username=conductor
spring.datasource.password=conductor

# Hikari pool sizes are -1 by default and prevent startup
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2

# Elasticsearch indexing is enabled.
conductor.indexing.enabled=true
conductor.elasticsearch.version=7
conductor.elasticsearch.url=http://es:9200
conductor.elasticsearch.clusterHealthColor=yellow

# Enable Prometheus
conductor.metrics-prometheus.enabled=true
management.endpoints.web.exposure.include=prometheus,health,info,metrics

# GRPC disabled
conductor.grpc-server.enabled=false

# Load sample kitchen sink disabled
loadSample=false
@ -0,0 +1,88 @@
upstream conductor_server {
    ip_hash;
    server conductor-server:8080;
}

map $http_authorization $source_auth {
    default "";
}

js_var $auth_token;
js_var $pep_credentials;

server {
    listen 80;
    server_name conductor.pre.d4science.org;

    location / {
        # Directory where the React app's static files are stored
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    location /health {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /actuator/prometheus {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /api/ {
        js_content pep.enforce;
    }

    location @backend {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_pass http://conductor_server;
    }

    location /jwt_verify_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Accept-Encoding identity;
        proxy_pass "https://accounts.pre.d4science.org/auth/realms/d4science/protocol/openid-connect/token/introspect";

        proxy_ignore_headers Cache-Control Expires Set-Cookie;

        proxy_cache token_responses;   # Enable caching
        proxy_cache_key $source_auth;  # Cache for each source authentication
        proxy_cache_lock on;           # Duplicate tokens must wait
        proxy_cache_valid 200 10s;     # How long to use each response
    }

    location /jwt_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Accept-Encoding identity;

        proxy_pass "https://accounts.pre.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
    }

    location /permission_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Authorization "Bearer $auth_token";
        proxy_set_header Accept-Encoding identity;
        proxy_pass "https://accounts.pre.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
    }

}
@@ -0,0 +1,183 @@
import React, { Component } from "react";

import { Route, Switch } from "react-router-dom";
import { makeStyles } from "@material-ui/styles";
import { Button, AppBar, Toolbar } from "@material-ui/core";
import AppLogo from "./plugins/AppLogo";
import NavLink from "./components/NavLink";

import WorkflowSearch from "./pages/executions/WorkflowSearch";
import TaskSearch from "./pages/executions/TaskSearch";

import Execution from "./pages/execution/Execution";
import WorkflowDefinitions from "./pages/definitions/Workflow";
import WorkflowDefinition from "./pages/definition/WorkflowDefinition";
import TaskDefinitions from "./pages/definitions/Task";
import TaskDefinition from "./pages/definition/TaskDefinition";
import EventHandlerDefinitions from "./pages/definitions/EventHandler";
import EventHandlerDefinition from "./pages/definition/EventHandler";
import TaskQueue from "./pages/misc/TaskQueue";
import KitchenSink from "./pages/kitchensink/KitchenSink";
import DiagramTest from "./pages/kitchensink/DiagramTest";
import Examples from "./pages/kitchensink/Examples";
import Gantt from "./pages/kitchensink/Gantt";

import CustomRoutes from "./plugins/CustomRoutes";
import AppBarModules from "./plugins/AppBarModules";
import CustomAppBarButtons from "./plugins/CustomAppBarButtons";
import Workbench from "./pages/workbench/Workbench";

import { Helmet } from "react-helmet";

const useStyles = makeStyles((theme) => ({
  root: {
    backgroundColor: "#efefef",
    display: "flex",
  },
  body: {
    width: "100vw",
    height: "100vh",
    paddingTop: theme.overrides.MuiAppBar.root.height,
  },
  toolbarRight: {
    marginLeft: "auto",
    display: "flex",
    flexDirection: "row",
  },
  toolbarRegular: {
    minHeight: 80,
  },
}));

class AppAuth extends Component {
  render() {
    return (
      <div>
        <Helmet>
          <script src="https://cdn.pre.d4science.org/boot/d4s-boot.js"></script>
        </Helmet>
        <d4s-boot-2 url="https://accounts.d4science.org/auth" redirect-url="http://localhost/login/callback" gateway="conductor-ui">
        </d4s-boot-2>
      </div>
    );
  }
}

class AppBody extends Component {
  constructor(props) {
    super(props);
    this.state = { open: false };
  }

  setOpen(v) {
    this.setState({ open: v });
  }

  componentDidMount() {
    document.addEventListener("authenticated", (ev) => {
      this.setOpen(true);
    });
  }

  render() {
    const classes = this.props.classes;
    return !this.state.open ? (
      <div></div>
    ) : (
      <div className={classes.root}>
        <AppBar position="fixed">
          <Toolbar
            classes={{
              regular: classes.toolbarRegular,
            }}
          >
            <AppLogo />
            <Button component={NavLink} path="/">
              Executions
            </Button>
            <Button component={NavLink} path="/workflowDefs">
              Definitions
            </Button>
            <Button component={NavLink} path="/taskQueue">
              Task Queues
            </Button>
            <Button component={NavLink} path="/workbench">
              Workbench
            </Button>
            <CustomAppBarButtons />

            <div className={classes.toolbarRight}>
              <AppBarModules />
            </div>
          </Toolbar>
        </AppBar>
        <div className={classes.body}>
          <Switch>
            <Route exact path="/">
              <WorkflowSearch />
            </Route>
            <Route exact path="/search/by-tasks">
              <TaskSearch />
            </Route>
            <Route path="/execution/:id/:taskId?">
              <Execution />
            </Route>
            <Route exact path="/workflowDefs">
              <WorkflowDefinitions />
            </Route>
            <Route exact path="/workflowDef/:name?/:version?">
              <WorkflowDefinition />
            </Route>
            <Route exact path="/taskDefs">
              <TaskDefinitions />
            </Route>
            <Route exact path="/taskDef/:name?">
              <TaskDefinition />
            </Route>
            <Route exact path="/eventHandlerDef">
              <EventHandlerDefinitions />
            </Route>
            <Route exact path="/eventHandlerDef/:name">
              <EventHandlerDefinition />
            </Route>
            <Route exact path="/taskQueue/:name?">
              <TaskQueue />
            </Route>
            <Route exact path="/workbench">
              <Workbench />
            </Route>
            <Route exact path="/kitchen">
              <KitchenSink />
            </Route>
            <Route exact path="/kitchen/diagram">
              <DiagramTest />
            </Route>
            <Route exact path="/kitchen/examples">
              <Examples />
            </Route>
            <Route exact path="/kitchen/gantt">
              <Gantt />
            </Route>
            <CustomRoutes />
          </Switch>
        </div>
      </div>
    );
  }
}

class AppContent extends Component {
  render() {
    return (
      <div>
        <AppAuth />
        <AppBody classes={this.props.classes} />
      </div>
    );
  }
}

// Keep functional constructor to avoid problems with useStyles
export default function App() {
  const classes = useStyles();

  return <AppContent classes={classes} />;
}
@@ -0,0 +1,11 @@
[common]
loglevel = info
threads = 1
pollrate = 1

[pymail]
server = smtp-relay.d4science.org
user = conductor_prod
password =
protocol = starttls
port = 587
@@ -0,0 +1,26 @@
# Database persistence type.
conductor.db.type=postgres

spring.datasource.url=jdbc:postgresql://postgresql-srv.d4science.org:5432/conductor
spring.datasource.username=conductor_u
spring.datasource.password=c36dda661add7c2b5093087ddb655992

# Hikari pool sizes are -1 by default and prevent startup
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2

# Elasticsearch indexing is enabled.
conductor.indexing.enabled=true
conductor.elasticsearch.version=7
conductor.elasticsearch.url=http://es:9200
conductor.elasticsearch.clusterHealthColor=yellow

# Enable Prometheus
conductor.metrics-prometheus.enabled=true
management.endpoints.web.exposure.include=prometheus,health,info,metrics

# GRPC disabled
conductor.grpc-server.enabled=false

# Load sample kitchen sink disabled
loadSample=false
@@ -0,0 +1,88 @@
upstream conductor_server {
    ip_hash;
    server conductor-server:8080;
}

map $http_authorization $source_auth {
    default "";
}

js_var $auth_token;
js_var $pep_credentials;

server {
    listen 80;
    server_name conductor.d4science.org;

    location / {
        # This is the directory where the React app's static files are stored
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    location /health {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /actuator/prometheus {
        proxy_set_header Host $host;
        proxy_pass http://conductor_server;
    }

    location /api/ {
        js_content pep.enforce;
    }

    location @backend {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_pass http://conductor_server;
    }

    location /jwt_verify_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Accept-Encoding identity;
        proxy_pass "https://accounts.d4science.org/auth/realms/d4science/protocol/openid-connect/token/introspect";

        proxy_ignore_headers Cache-Control Expires Set-Cookie;

        proxy_cache token_responses;       # Enable caching
        proxy_cache_key $source_auth;      # Cache for each source authentication
        proxy_cache_lock on;               # Duplicate tokens must wait
        proxy_cache_valid 200 10s;         # How long to use each response
    }

    location /jwt_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Authorization $pep_credentials;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Accept-Encoding identity;

        proxy_pass "https://accounts.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
    }

    location /permission_request {
        internal;
        proxy_method POST;
        proxy_http_version 1.1;
        proxy_set_header Content-Type "application/x-www-form-urlencoded";
        proxy_set_header Authorization "Bearer $auth_token";
        proxy_set_header Accept-Encoding identity;
        proxy_pass "https://accounts.d4science.org/auth/realms/d4science/protocol/openid-connect/token";
    }
}
@@ -0,0 +1,12 @@
# In memory database persistence model.
conductor.db.type=memory

# Elasticsearch indexing is disabled.
conductor.indexing.enabled=false
conductor.elasticsearch.clusterHealthColor=yellow

# GRPC disabled
conductor.grpc-server.enabled=false

# Load sample kitchen sink disabled
loadSample=false
@@ -0,0 +1,42 @@
import { useEnv } from "./env";

export function useFetchContext() {
  const { stack } = useEnv();
  return {
    stack,
    ready: true,
  };
}

export function fetchWithContext(
  path,
  context,
  fetchParams,
  isJsonResponse = true
) {
  const newParams = { ...fetchParams };

  const newPath = `/api/${path}`;
  const cleanPath = newPath.replace(/([^:]\/)\/+/g, "$1"); // Cleanup duplicated slashes

  const boot = document.querySelector("d4s-boot-2");

  return boot
    .secureFetch(cleanPath, newParams)
    .then((res) => Promise.all([res, res.text()]))
    .then(([res, text]) => {
      if (!res.ok) {
        // Get error message from body or default to response status
        const error = text || res.status;
        return Promise.reject(error);
      } else if (!text || text.length === 0) {
        return null;
      } else if (!isJsonResponse) {
        return text;
      } else {
        try {
          return JSON.parse(text);
        } catch (e) {
          return text;
        }
      }
    });
}
@@ -0,0 +1,139 @@
export default { config };

var config = {
    "hosts" : [
        {
            "host" : ["conductor.d4science.org", "conductor.pre.d4science.org", "conductor.dev.d4science.org", "conductor.int.d4science.net", "conductor"],
            "audience" : "conductor-server",
            "allow-basic-auth" : true,
            "paths" : [
                {
                    "name" : "metadata",
                    "path" : "^/api/metadata/(taskdefs|workflow)/?.*$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["get", "list"]
                        }
                    ]
                },
                {
                    "name" : "metadata.taskdefs",
                    "path" : "^/api/metadata/taskdefs/?.*$",
                    "methods" : [
                        {
                            "method" : "POST",
                            "scopes" : ["create"]
                        },
                        {
                            "method" : "DELETE",
                            "scopes" : ["delete"]
                        },
                        {
                            "method" : "PUT",
                            "scopes" : ["update"]
                        }
                    ]
                },
                {
                    "name" : "metadata.workflow",
                    "path" : "^/api/metadata/workflow/?.*$",
                    "methods" : [
                        {
                            "method" : "POST",
                            "scopes" : ["create"]
                        },
                        {
                            "method" : "DELETE",
                            "scopes" : ["delete"]
                        },
                        {
                            "method" : "PUT",
                            "scopes" : ["update"]
                        }
                    ]
                },
                {
                    "name" : "workflow",
                    "path" : "^/api/workflow/?.*$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["get"]
                        },
                        {
                            "method" : "POST",
                            "scopes" : ["start"]
                        },
                        {
                            "method" : "DELETE",
                            "scopes" : ["terminate"]
                        }
                    ]
                },
                {
                    "name" : "event",
                    "path" : "^/api/event/?.*$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["get"]
                        },
                        {
                            "method" : "POST",
                            "scopes" : ["create"]
                        },
                        {
                            "method" : "DELETE",
                            "scopes" : ["delete"]
                        },
                        {
                            "method" : "PUT",
                            "scopes" : ["update"]
                        }
                    ]
                },
                {
                    "name" : "task",
                    "path" : "^/api/tasks/poll/.+$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["poll"]
                        }
                    ]
                },
                {
                    "name" : "queue",
                    "path" : "^/api/tasks/queue/.+$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["get"]
                        }
                    ]
                },
                {
                    "name" : "task",
                    "path" : "^/api/tasks[/]?$",
                    "methods" : [
                        {
                            "method" : "POST",
                            "scopes" : ["update"]
                        }
                    ]
                },
                {
                    "name" : "log",
                    "path" : "^/api/tasks/.+/log$",
                    "methods" : [
                        {
                            "method" : "GET",
                            "scopes" : ["get"]
                        }
                    ]
                }
            ]
        }
    ]
}
@@ -0,0 +1,45 @@
# Added to load njs module
load_module modules/ngx_http_js_module.so;

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

env pep_credentials;

http {

    # Added to import pep script
    js_import pep.js;

    # Added to bind enforce function
    js_set $authorization pep.enforce;

    # Added to create cache for tokens and auth calls
    proxy_cache_path /var/cache/nginx/pep keys_zone=token_responses:1m max_size=2m;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
@@ -0,0 +1,332 @@
export default { enforce };

import defaultExport from './config.js';

function log(c, s){
    c.request.error(s)
}

var _debug = true
function debug(c, s){
    if(_debug === true){
        log(c, s)
    }
}

function enforce(r) {

    var context = {
        request : r,
        config : defaultExport["config"],
        backend : (defaultExport.backend ? defaultExport.backend : "@backend")
    }

    log(context, "Inside NJS enforce for " + r.method + " @ " + r.headersIn.host + "/" + r.uri)

    context = computeProtection(context)

    wkf.run(wkf.build(context), context)
}

// ######## WORKFLOW FUNCTIONS ###############
var wkf = {

    build : (context)=>{
        var actions = [
            "export_pep_credentials",
            "parse_authentication",
            "check_authentication",
            "export_authn_token",
            "pip",
            "pdp",
            "export_backend_headers",
            "pass"
        ]
        return actions
    },

    run : (actions, context) => {
        context.request.error("Starting workflow with " + njs.dump(actions))
        var w = actions.reduce(
            (acc, f) => { return acc.then(typeof(f) === "function" ? f : wkf[f]) },
            Promise.resolve().then(()=>context)
        )
        w.catch(e => { context.request.error(njs.dump(e)); context.request.return(401) })
    },

    export_pep_credentials : exportPepCredentials,
    export_authn_token : exportAuthToken,
    export_backend_headers : exportBackendHeaders,
    parse_authentication : parseAuthentication,
    check_authentication : checkAuthentication,
    verify_token : verifyToken,
    request_token : requestToken,
    pip : pipExecutor,
    pdp : pdpExecutor,
    pass : pass,

    // PIP utilities
    "get-path-component" : (c, i) => c.request.uri.split("/")[i],
    "get-token-field" : getTokenField,
    "get-contexts" : (c) => {
        var out = [];
        var ra = c.authn.verified_token["resource_access"]
        if(ra){
            for(var k in ra){
                if(ra[k].roles && ra[k].roles.length !== 0) out.push(k)
            }
        }
        return out;
    }
}

function getTokenField(context, f){
    return context.authn.verified_token[f]
}

function exportVariable(context, name, value){
    context.request.variables[name] = value
    return context
}

function exportBackendHeaders(context){
    return context
}

function exportPepCredentials(context){
    var credentials = process.env["PEP_CREDENTIALS"] || process.env["pep_credentials"]
    if(credentials){
        return exportVariable(context, "pep_credentials", "Basic " + credentials)
    }else if(context.config["pep_credentials"]){
        return exportVariable(context, "pep_credentials", "Basic " + context.config["pep_credentials"])
    }else{
        throw new Error("Need PEP credentials")
    }
}

function exportAuthToken(context){
    return exportVariable(context, "auth_token", context.authn.token)
}

function checkAuthentication(context){
    return context.authn.type === "bearer" ? wkf.verify_token(context) : wkf.request_token(context)
}

function parseAuthentication(context){
    context.request.log("Inside parseAuthentication")
    var incomingauth = context.request.headersIn["Authorization"]

    if(!incomingauth) throw new Error("Authentication required");

    var arr = incomingauth.trim().replace(/\s\s+/g, " ").split(" ")
    if(arr.length != 2) throw new Error("Unknown authentication scheme");

    var type = arr[0].toLowerCase()
    if(type === "basic" && context.authz.host && context.authz.host["allow-basic-auth"]){
        var unamepass = Buffer.from(arr[1], 'base64').toString().split(":")
        if(unamepass.length != 2) return null;
        context.authn = { type : type, raw : arr[1], user : unamepass[0], password : unamepass[1] }
        return context
    }else if(type === "bearer"){
        context.authn = { type : type, raw : arr[1], token : arr[1] }
        return context
    }
    throw new Error("Unknown authentication scheme");
}

function verifyToken(context){
    log(context, "Inside verifyToken")
    debug(context, "Token is " + context.authn.token)
    var options = {
        "body" : "token=" + context.authn.token + "&token_type_hint=access_token"
    }
    return context.request.subrequest("/jwt_verify_request", options)
        .then(reply=>{
            if (reply.status === 200) {
                var response = JSON.parse(reply.responseBody);
                if (response.active === true) {
                    return response
                } else {
                    throw new Error("Unauthorized: " + reply.responseBody)
                }
            } else {
                throw new Error("Unauthorized: " + reply.responseBody)
            }
        }).then(verified_token => {
            context.authn.verified_token =
                JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
            return context
        })
}

function requestToken(context){
    log(context, "Inside requestToken")
    var options = {
        "body" : "grant_type=client_credentials&client_id=" + context.authn.user + "&client_secret=" + context.authn.password
    }
    return context.request.subrequest("/jwt_request", options)
        .then(reply=>{
            if (reply.status === 200) {
                var response = JSON.parse(reply.responseBody);
                context.authn.token = response.access_token
                context.authn.verified_token =
                    JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
                return context
            } else if (reply.status === 400 || reply.status === 401){
                // Client credentials failed: fall back to the password grant
                var options = {
                    "body" : "grant_type=password&username=" + context.authn.user + "&password=" + context.authn.password
                }
                return context.request.subrequest("/jwt_request", options)
                    .then(reply=>{
                        if (reply.status === 200) {
                            var response = JSON.parse(reply.responseBody);
                            context.authn.token = response.access_token
                            context.authn.verified_token =
                                JSON.parse(Buffer.from(context.authn.token.split('.')[1], 'base64url').toString())
                            return context
                        } else {
                            throw new Error("Unauthorized " + reply.status)
                        }
                    })
            } else {
                throw new Error("Unauthorized " + reply.status)
            }
        })
}

function pipExecutor(context){
    log(context, "Inside extra claims PIP")
    context.authz.pip.forEach(extra =>{
        // Call extra claim pip function
        try{
            var operator = extra.operator
            var result = wkf[operator](context, extra.args)
            // Ensure array and add to extra_claims
            if(!(result instanceof Array)) result = [result]
            if(!context.extra_claims) context.extra_claims = {};
            context.extra_claims[extra.claim] = result
        } catch (error){
            log(context, "Skipping invalid extra claim " + njs.dump(error))
        }
    })
    log(context, "Extra claims are " + njs.dump(context.extra_claims))
    return context
}

function pdpExecutor(context){
    log(context, "Inside PDP")
    return context.authz.pdp(context)
}

function umaCall(context){
    log(context, "Inside UMA call")
    var options = { "body" : computePermissionRequestBody(context) };
    return context.request.subrequest("/permission_request", options)
        .then(reply =>{
            if(reply.status === 200){
                debug(context, "UMA call reply is " + reply.status)
                return context
            }else{
                throw new Error("Response for authorization request is not ok " + reply.status + " " + njs.dump(reply.responseBody))
            }
        })
}

function pass(context){
    log(context, "Inside pass");
    if(typeof(context.backend) === "string") context.request.internalRedirect(context.backend);
    else if (typeof(context.backend) === "function") context.request.internalRedirect(context.backend(context))
    return context;
}

// ######## AUTHORIZATION PART ###############
function computePermissionRequestBody(context){

    if(!context.authz.host || !context.authz.path){
        throw new Error("Enforcement mode is always enforcing. Host or path not found...")
    }

    var audience = computeAudience(context)
    var grant = "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket"
    var mode = "response_mode=decision"
    var permissions = computePermissions(context)
    var extra = ""
    if(context.extra_claims){
        extra =
            "claim_token_format=urn:ietf:params:oauth:token-type:jwt&claim_token=" +
            Buffer.from(JSON.stringify(context.extra_claims)).toString("base64url")
    }
    var body = audience + "&" + grant + "&" + permissions + "&" + mode + "&" + extra
    context.request.error("Computed permission request body is " + body)
    return body
}

function computeAudience(context){
    var aud = context.request.headersIn.host
    if(context.authz.host){
        aud = context.authz.host.audience || context.authz.host.host
    }
    return "audience=" + aud
}

function computePermissions(context){
    var resource = context.request.uri
    if(context.authz.path){
        resource = context.authz.path.name || context.authz.path.path
    }
    var scopes = []
    if(context.authz.method && context.authz.method.scopes){
        scopes = context.authz.method.scopes
    }
    if(scopes.length > 0){
        return scopes.map(s=>"permission=" + resource + "#" + s).join("&")
    }
    return "permission=" + resource
}

function getPath(hostconfig, incomingpath, incomingmethod){
    var paths = hostconfig.paths || []
    var matchingpaths = paths
        .filter(p => { return incomingpath.match(p.path) != null })
        .reduce((acc, p) => {
            if (!p.methods || p.methods.length === 0) acc.weak.push({ path: p });
            else{
                var matchingmethods = p.methods.filter(m=>m.method.toUpperCase() === incomingmethod)
                if(matchingmethods.length > 0) acc.strong.push({ method : matchingmethods[0], path: p });
            }
            return acc;
        }, { strong: [], weak: [] })
    return matchingpaths.strong.concat(matchingpaths.weak)[0]
}

function getHost(config, host){
    var matching = config.hosts.filter(h=>{
        // Compare for both string and array of strings
        return ((h.host.filter && h.host.indexOf(host) !== -1) || h.host === host)
    })
    return matching.length > 0 ? matching[0] : null
}

function computeProtection(context){
    debug(context, "Getting by host " + context.request.headersIn.host)
    context.authz = {}
    context.authz.host = getHost(context.config, context.request.headersIn.host)
    if(context.authz.host !== null){
        log(context, "Host found: " + context.authz.host)
        context.authz.pip = context.authz.host.pip ? context.authz.host.pip : [];
        context.authz.pdp = context.authz.host.pdp ? context.authz.host.pdp : umaCall;
        var pathandmethod = getPath(context.authz.host, context.request.uri, context.request.method);
        if(pathandmethod){
            context.authz.path = pathandmethod.path;
            context.authz.pip = context.authz.path.pip ? context.authz.pip.concat(context.authz.path.pip) : context.authz.pip;
            context.authz.pdp = context.authz.path.pdp ? context.authz.path.pdp : context.authz.pdp;
            context.authz.method = pathandmethod.method;
            if(context.authz.method){
                context.authz.pip = context.authz.method.pip ? context.authz.pip.concat(context.authz.method.pip) : context.authz.pip;
                context.authz.pdp = context.authz.method.pdp ? context.authz.method.pdp : context.authz.pdp;
            }
        }
    }
    debug(context, "Leaving protection computation")
    return context
}
@@ -0,0 +1,150 @@
version: '3.6'

services:
  postgres:
    image: postgres:14
    environment:
      - POSTGRES_USER=conductor
      - POSTGRES_PASSWORD=conductor
    volumes:
      - pg_db_data:/var/lib/postgresql/data
    networks:
      - conductor-network
    healthcheck:
      test: timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5432'
      interval: 5s
      timeout: 5s
      retries: 12
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
      placement:
        constraints: [node.role == worker]
    logging:
      driver: "journald"

  es:
    image: elasticsearch:7.6.2
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
      - transport.host=0.0.0.0
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - es_data:/usr/share/elasticsearch/data
    networks:
      - conductor-network
    healthcheck:
      test: timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/9300'
      interval: 5s
      timeout: 5s
      retries: 12
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
      placement:
        constraints: [node.role == worker]
    logging:
      driver: "journald"

  conductor-server:
    environment:
      - CONFIG_PROP=config.properties
    image: "nubisware/conductor-server3:dev"
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
      placement:
        constraints: [node.role == worker]
    logging:
      driver: "journald"

  pep:
    image: "nubisware/conductor-frontend:dev"
    networks:
      - conductor-network
      - haproxy-public
    deploy:
      mode: replicated
      endpoint_mode: dnsrr
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
      placement:
        constraints: [node.role == worker]
    environment:
      pep_credentials: ${pep_credentials}

  workers:
    environment:
      CONDUCTOR_SERVER: http://conductor-server:8080/api/
      CONDUCTOR_HEALTH: http://conductor-server:8080/health
      worker_plugins: "Shell Eval Mail HttpBridge"
      smtp_pass: ${smtp_pass}
      smtp_user: ${smtp_user}
    image: 'nubisware/nubisware-conductor-worker-py-d4s'
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
    logging:
      driver: "journald"

  pyrestworkers:
    environment:
      CONDUCTOR_SERVER: http://conductor-server:8080/api/
      CONDUCTOR_HEALTH: http://conductor-server:8080/health
      worker_plugins: Http
    image: 'nubisware/nubisware-conductor-worker-py-d4s'
    networks:
      - conductor-network
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: any
        delay: 5s
        window: 120s
    logging:
      driver: "journald"

networks:
  conductor-network:
  haproxy-public:
    external: true

volumes:
  pg_db_data:
    driver: local
    driver_opts:
      type: nfs4
      o: "nfsvers=4,addr=146.48.123.250,rw"
      device: ":/nfs/conductor_pg_dev"
  es_data:
    driver: local
    driver_opts:
      type: nfs4
      o: "nfsvers=4,addr=146.48.123.250,rw"
      device: ":/nfs/conductor_es_dev"