first share

This commit is contained in:
dcore94 2023-05-03 09:56:53 +02:00
parent 8259345ff1
commit b1197e467f
12 changed files with 578 additions and 0 deletions

Makefile Normal file

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

make.bat Normal file

@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd

source/conf.py Normal file

@@ -0,0 +1,28 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = 'CCP'
copyright = '2023, Marco Lettere'
author = 'Marco Lettere'
release = '0.1.0'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = []

templates_path = ['_templates']
exclude_patterns = []

# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = 'alabaster'
html_static_path = ['_static']

source/developermanual/index.rst Normal file

@@ -0,0 +1,2 @@
Developer manual
================

source/images: 5 binary image files added, not shown (54 KiB, 1.5 MiB, 59 KiB, 99 KiB, 74 KiB).

source/index.rst Normal file

@@ -0,0 +1,23 @@
.. CCP Documentation documentation master file, created by
   sphinx-quickstart on Thu Apr 27 09:47:54 2023.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to CCP's documentation!
===============================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   introduction
   usermanual/index
   developermanual/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

source/introduction.rst Normal file

@@ -0,0 +1,53 @@
Introduction
============

About
-----

CCP is the Cloud Computing Platform of `D4Science <https://en.wikipedia.org/wiki/D4Science>`_. It builds upon the experience of the previous DataMiner initiative and uses a novel approach based on containerisation, REST APIs and JSON.

Several fields of ICT have experienced a major evolution during the last decade, bringing many new advances such as the widespread adoption of microservice development patterns. This has resulted in substantial improvements in the interoperability and composability of software artefacts.

This vast landscape of new opportunities, together with greatly increased requirements and expectations, has driven the design and development of a new Cloud Computing Platform that is the result of a global rethink of DataMiner.

Architecture
------------

A logical vision of the CCP architecture is depicted in the following Figure.

.. figure:: images/logicalvisionarchitecture.png
   :alt: Logical vision of architecture

   The CCP logical vision of architecture

In this vision, CCP is a layered set of components. At the bottom, the **Infrastructure** layer encompasses components such as hardware, Virtual Machines, container based clusters, storage facilities and networks.

The **Runtimes** layer offers a set of prebuilt, preconfigured execution environments such as containers or Virtual Machine images.

The **Method** layer contains the specifications of computational methods, which can be anything from social mining algorithms to AI classifiers and data harvesters. Data scientists with development skills are encouraged to develop new Methods or clone existing ones; it is their responsibility to choose compatible Runtimes or propose new ones for integration. Tools for sharing Methods with communities such as Virtual Research Environments are made available at this layer.

The overall user community works at the **Workbench** layer, the abstraction of overarching tools that can directly use the available Methods, compose them into workflows and integrate them into visual tools such as Jupyter Notebooks, R scripts and Knime workflows.

**Experimentation** is the term that defines the activity of configuring new Runtimes, defining new Methods and using them in the Workbench.

In the opposite direction, **Consolidation** represents the possibility of transforming dynamic objects into more static ones in order to improve reusability, portability and overall performance. For instance, workflows or combinations of Methods could be transformed into Methods themselves, or even Methods into Runtimes.

The logical architecture presented in the following Figure shows the natively distributed nature of CCP.

.. figure:: images/logicalarchitecture.png
   :alt: Logical architecture

   The CCP logical architecture

Starting from the top, **Infrastructures** (the computing resources that host CCP Method executions, ranging from simple laptops up to clusters of server Hosts) can be connected as runtime execution environments by installing a Controller component. Within an Infrastructure, Hosts are computational nodes, such as physical or virtual servers, to which Method execution is delegated.

**Controllers** are processes that communicate through a specific API with CCP in order to poll for tasks to perform on the Infrastructure they control. Tasks may include deploying and running Methods, cleaning up executions, and reporting on the overall status of the Infrastructure.

In order to keep the current state of CCP, two registries are involved: the Method & Execution Registry and the Infrastructure & Runtime Registry.

User driven visual components are available at the frontend to manage Infrastructures, Runtimes, Methods and Executions. These components are identified by the user icon in the previous Figure.

Because many of the operations involved are lengthy and asynchronous, CCP includes a Logging Service that sends real-time notifications about the state of a particular process back to the user. These notifications include the advancement of Executions, Infrastructure status updates and error conditions.

All complex processes involved in CCP are implemented as workflows inside a Workflow Orchestrator which, in addition to granting a high level of flexibility and customisation, provides a centralised endpoint to monitor progress and check for errors that may occur.

At the basis of all interactions among external actors, such as users and Controllers, a strong authentication and authorisation mechanism is enforced by an Identity and Access Management (IAM) software. This makes it possible to address security requirements as well as to implement ownership attribution and auditing.

source/usermanual/index.rst Normal file

@@ -0,0 +1,417 @@
User manual
===========

This manual documents the features of CCP (Cloud Computing Platform) available to scientists who want to design, integrate and execute their Methods.

Infrastructures
---------------

Infrastructures are the technological platforms on top of which Methods are executed. They are identified by a unique id, a name, a description and a type.
The type also defines the encapsulation or containerisation technology used for the Runtimes.
Currently CCP supports single Docker containers, Docker Swarm based clusters and LXD clusters.
It is the responsibility of Method developers to decide which Infrastructure their Methods will be executed on, and this choice restricts the type of Runtimes that can be selected.

Runtimes
--------

Runtimes are containers that encapsulate a Method. They can be seen as minimal virtual environments made of an operating system and all the dependencies required by a particular Method.
The technology and the list of available Runtimes are strictly related to the type of the Infrastructure. For example, in an Infrastructure of type Docker or Docker Swarm cluster, the available Runtimes are those listed in a registry of Docker images.

Methods
-------

Logically, Methods are computational functions or procedures. They can be implementations of algorithms or numerical recipes, data gathering or transformation routines, AI modules, or generators of visuals and charts. Whatever can be executed and produces a valuable scientific result that needs to be reproducible and repeatable can be written as a Method.
CCP tries to be as lax as possible with respect to the technical constraints on Methods. It aims at supporting every language and every reasonable combination of operating system and dependencies by providing stacks of Runtimes that offer many ready-made solutions while remaining open to customisation.

Anatomy of a Method
~~~~~~~~~~~~~~~~~~~

At its heart, a Method is a JSON structure that aggregates a section of metadata, the definition of input parameters, the description of expected outputs, instructions for customising the deploy and execute steps of its lifecycle, and a link to a compatible Infrastructure.
The syntax of the JSON data structure is constrained by the grammar proposed by the OGC Processes API specification (<https://ogcapi.ogc.org/processes/>).
The following code snippet illustrates an example.

.. code:: json

   {
      "id":"408d9dc5-ee37-4123-9f07-4294f13bce19",
      "title":"JDK-8 Example maven",
      "description":"Test for executing a jdk8 sample app from GitHub repository built with maven",
      "version":"0.0.1",
      "jobControlOptions":[
         "async-execute"
      ],
      "keywords":[
         "jdk", "java", "jdk8", "java8", "maven"
      ],
      "metadata":[
         {
            "title":"Marco Lettere",
            "role":"author",
            "href":"https://accounts.dev.d4science.org/auth/admin/realms/d4science/users/88c76e47-5881-4716-a2bf-02d3b4073574"
         },
         {
            "role":"category",
            "title":"Test"
         },
         {
            "title":"%2Fgcube%2Fdevsec%2FCCP",
            "role":"context",
            "href":"https://accounts.dev.d4science.org/auth/admin/realms/d4science/clients/%2Fgcube%2Fdevsec%2FCCP"
         }
      ],
      "outputTransmission":[
         "value"
      ],
      "inputs":{
         "ccpimage":{
            "id":"ccpimage",
            "title":"Runtime",
            "description":"The image of the runtime to use for method execution. This depends on the infrastructure specific protocol for interacting with registries.",
            "minOccurs":1,
            "maxOccurs":1,
            "schema":{
               "type":"string",
               "format":"url",
               "contentMediaType":"text/plain",
               "default":"nubisware/ccp-jdk8-jammy:latest",
               "readonly":"true"
            }
         },
         "repository":{
            "id":"repository",
            "title":"Repository URL",
            "description":"Git url to repository",
            "minOccurs":1,
            "maxOccurs":1,
            "schema":{
               "type":"string",
               "format":"url",
               "default":"https://github.com/dcore94/jdk-maven-example"
            }
         },
         "mainclass":{
            "id":"mainclass",
            "title":"Main Class",
            "description":"The main class to run",
            "minOccurs":1,
            "maxOccurs":1,
            "schema":{
               "type":"string",
               "default":"example.HelloWorld"
            }
         }
      },
      "outputs":{
         "filetext":{
            "id":"filetext",
            "title":"Text output",
            "description":"Some output is written in txt format to file.txt",
            "minOccurs":1,
            "maxOccurs":1,
            "metadata":[
               {
                  "title":"file.txt",
                  "role":"file",
                  "href":"/output/file.txt"
               }
            ],
            "schema":{
               "type":"string",
               "contentMediaType":"text/plain"
            }
         },
         "filexml":{
            "id":"filexml",
            "title":"XML output",
            "description":"Some output is written in XML format to file.xml",
            "minOccurs":1,
            "maxOccurs":1,
            "metadata":[
               {
                  "title":"file.xml",
                  "role":"file",
                  "href":"/ccp_data/output/file.xml"
               }
            ],
            "schema":{
               "type":"string",
               "contentMediaType":"application/xml"
            }
         },
         "filejson":{
            "id":"filejson",
            "title":"JSON output",
            "description":"Some output is written in JSON format to file.json",
            "minOccurs":1,
            "maxOccurs":1,
            "metadata":[
               {
                  "title":"file.json",
                  "role":"file",
                  "href":"/ccp_data/output/file.json"
               }
            ],
            "schema":{
               "type":"string",
               "contentMediaType":"application/json"
            }
         },
         "filecsv":{
            "id":"filecsv",
            "title":"CSV output",
            "description":"Some output is written in CSV format to file.csv",
            "minOccurs":1,
            "maxOccurs":1,
            "metadata":[
               {
                  "title":"file.csv",
                  "role":"file",
                  "href":"/output/file.csv"
               }
            ],
            "schema":{
               "type":"string",
               "contentMediaType":"text/csv"
            }
         }
      },
      "additionalParameters":{
         "parameters":[
            {
               "name":"execute-script",
               "value":[
                  "cd execution",
                  "mkdir -p /ccp_data/output",
                  "java -cp target/jdk-maven-example-0.0.1-SNAPSHOT.jar {{ mainclass }} 1>> /ccp_data/stdout.txt 2>> /ccp_data/stderr.txt",
                  "cp /tmp/file.* /ccp_data/output/"
               ]
            },
            {
               "name":"deploy-script",
               "value":[
                  "git clone {{ repository }} execution 1>> /ccp_data/stdout.txt 2>> /ccp_data/stderr.txt",
                  "cd execution",
                  "mvn clean package 1> /ccp_data/stdout.txt 2>> /ccp_data/stderr.txt",
                  "cd -"
               ]
            },
            {
               "name":"undeploy-script",
               "value":[]
            },
            {
               "name":"cancel-script",
               "value":[]
            }
         ]
      },
      "links":[
         {
            "href":"infrastructures/nubisware-docker-swarm-nfs",
            "rel":"compatibleWith",
            "title":"Docker swarm with NFS on Nubis cluster"
         }
      ]
   }

This is an example of a Method that executes Java 8 code rooted at the main class *example.HelloWorld*, cloned from a public GitHub repository and built with Maven.

The keywords section contains keywords that help in searching for the Method. The metadata fields author and context record which user created the descriptor for the Method and in which context. Methods can also contain several category metadata items that help in classifying the Method.

jobControlOptions is hardcoded to "async-execute" because CCP always executes Methods asynchronously.

In the example above the Method has three inputs.

**ccpimage** is required to appear exactly once. This input is required for every Method that will be executed on CCP. It refers to the Runtime required for the Method execution. The input is a plain text string referencing a container image that matches the requirements of the Infrastructure. Since the example is compatible with a Docker based Infrastructure, the reference is a name in the Docker form *repository/image:versiontag*. This input is readonly because the default value provided at Method definition time is constrained and not editable.

**repository** is the URL of the Git repository to be cloned. It defaults to an example project but can be edited.

**mainclass** is the main class of the Java application.

The Method declares four example output files encoded as XML, JSON, CSV or plain text. As will be shown later, a Method is not required to return only what it declares as outputs. The output declaration is used mainly to semantically enrich an output.

The additionalParameters section encodes the scripts governing the Method's lifecycle, which is described in the following section. In this example, the deploy script clones the Git repository passed in the input parameter "repository" into a target folder and builds the code using Maven. The execute script creates a folder named "output", then launches the main class of the Java application and finally copies the output files (which the example Java code creates in the /tmp directory) to the output folder. The undeploy and cancel scripts are no-operations: in an environment based on containers, the clean-up operations are intrinsic.

The links section encodes the link to the Infrastructure that is declared to be compatible with the Method.

It is important to note that all inputs declared for the Method can be used as variables in the scripts by putting their id in double curly brackets, as shown in the sketch below. Other variables can be used in addition; they will be discussed in the section "Execution context of a Method".

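The actual template engine used by CCP is not described here; purely as an illustration, the following minimal sketch (function name and logic are hypothetical) shows how *{{ var }}* placeholders in script lines could be resolved from the declared inputs.

.. code-block:: python
   :caption: Illustrative sketch of input substitution in lifecycle scripts (not CCP's actual implementation)

   import re

   def render_script(lines, inputs):
       # Replace every {{ var }} placeholder with the corresponding input value.
       return [re.sub(r"\{\{\s*(\w+)\s*\}\}",
                      lambda m: str(inputs[m.group(1)]),
                      line)
               for line in lines]

   # Example taken from the execute-script of the Method above.
   script = ["java -cp target/jdk-maven-example-0.0.1-SNAPSHOT.jar {{ mainclass }}"]
   print(render_script(script, {"mainclass": "example.HelloWorld"}))
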
Lifecycle of a Method execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following Figure depicts what happens when the execution of a Method is requested by a user, either through a GUI widget of CCP or by invoking the REST API.

.. figure:: /images/statemachine.png
   :alt: Lifecycle of Method execution

   Lifecycle of Method execution

The message carrying the execution request is sent to CCP and the execution starts. The first task puts the execution in the *Launch* state. During this phase the Runtime for the execution is prepared. On a container based Infrastructure this usually resolves to using the *ccpimage* input parameter in order to fetch the container image from a repository and instantiate the container.

After the transition to the *Launch* state, as for every other transition, the outcome of the operation is evaluated; in case of errors the process terminates by transitioning directly to the *Destroy* state, thus ensuring that the Infrastructure is cleaned up.

After a successful *Launch*, the Method execution moves into the *Deploy* state. As shown by the script task decorator, this task is scripted, meaning that by default it is a no-operation and the commands to be performed are supplied by the creator of the Method at definition time through the *deploy-script* attribute. Example operations that could occur during this phase are: fetching code from Git repositories, installing fine grained dependencies (for example *pip install -r requirements.txt*), building code, downloading resource files.

From the *Deploy* phase a Method execution enters the *Execute* phase. As for the *Deploy* phase, what exactly happens here is determined by the *execute-script* provided by the Method creator at Method definition time. Instructions in the execute-script usually invoke the main code components.

The time spent in the *Execute* phase is limited by the Infrastructure. It is up to the Infrastructure manager to define the maximum amount of time allowed for Method execution. If the Method allows it, the execution time can be further limited by the user requesting the execution, by setting the *ccpmaxtime* input parameter.

The *Fetch* phase following a successful *Execute* phase is a non scriptable transition in charge of uploading the outputs of a Method execution to the Execution storage.

The following *Undeploy* phase can be used by Method developers to perform operations after the Method execution has terminated. This phase is not meant to be a cleanup task, because on containerised Infrastructures the system autonomously takes care of destroying resources at the end of a Method execution. Instead it can be used to perform extra work, such as notifying external systems or sharing outputs.

Finally, the *Destroy* phase is when the Infrastructure Controller literally destroys the Runtime of the execution and all resources created during the previous phases.

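The flow just described can be condensed into a toy model, shown for illustration only (this is not CCP's orchestration code): any failing phase short-circuits the chain, and *Destroy* always runs.

.. code-block:: python
   :caption: Illustrative model of the execution lifecycle (hypothetical code)

   def run_lifecycle(phases):
       # phases: ordered mapping of phase name -> callable returning True on success.
       try:
           for name, task in phases.items():
               if not task():
                   print(name + " failed, transitioning directly to Destroy")
                   break
       finally:
           # Destroy always runs, so the Infrastructure is always cleaned up.
           print("Destroy: releasing the Runtime and all created resources")

   run_lifecycle({
       "Launch": lambda: True,    # fetch the image and instantiate the container
       "Deploy": lambda: True,    # run the deploy-script
       "Execute": lambda: True,   # run the execute-script (time capped)
       "Fetch": lambda: True,     # upload outputs to the Execution storage
       "Undeploy": lambda: True,  # run the undeploy-script
   })
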
Execution context of a Method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

During the execution of *deploy-script*, *execute-script* and *undeploy-script*, as well as during Method execution, it is possible to access information that is contextual to CCP, the Method or the Execution request.
Some information is accessible as template variables that can be used in the scripts with the *{{ var }}* syntax. Other useful information is passed directly to the Method execution as environment variables that can be accessed with the appropriate APIs that every programming language supports.

All **input parameters** are passed to the scripts in the form of template variables. This allows the script to adapt the input values to the requirements of the Method (using them directly, passing them as command line arguments, setting them as environment variables, or writing them to files).

A few input parameters govern the Method execution itself rather than providing input to the Method; a sketch of how they can be set follows the list. In particular:

- **ccpimage**, as already mentioned, is required and is used automatically during the *Launch* phase in order to instantiate a container.
- **ccpmaxtime** can be used to limit the maximum execution time of a Method. The value is expressed in seconds and is capped by the maximum time configured for the Infrastructure.
- **ccpreplicas**, currently supported on Docker Swarm based Infrastructures, allows for creating multiple instances of a Method execution in order to obtain a coarse grained degree of parallelism.

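For illustration, the snippet below sketches how an execution request could set these parameters alongside the Method's own inputs, in the spirit of the OGC Processes API execute request; the endpoint path, payload details and token handling are assumptions, not the confirmed CCP API.

.. code-block:: python
   :caption: Sketch of an execution request with governing parameters (hypothetical endpoint)

   import requests

   ccp = "https://ccp.example.org/processes"  # placeholder base URL
   method_id = "408d9dc5-ee37-4123-9f07-4294f13bce19"

   payload = {
       "inputs": {
           "ccpimage": "nubisware/ccp-jdk8-jammy:latest",
           "repository": "https://github.com/dcore94/jdk-maven-example",
           "mainclass": "example.HelloWorld",
           "ccpmaxtime": 600,   # cap the execution at 600 seconds
           "ccpreplicas": 4     # four parallel replicas (Docker Swarm only)
       }
   }
   resp = requests.post(ccp + "/" + method_id + "/execution",
                        json=payload,
                        headers={"Authorization": "Bearer <token>"})
   print(resp.status_code)
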
A set of **environment variables** is passed to the Runtime inside which the Method is executed in order to provide additional context.

The following two environment variables provide context for the execution.

- **ccptaskname** is the id of the execution.
- **ccptaskid** is the index of the replica (1-based) when multiple replicas are requested with the input parameter ccpreplicas. This can be used to customise the behaviour of a replica, for example to access a slice of a dataset or to write output to different files.

.. code-block:: bash
   :caption: How to use *ccptaskid* in an *execute-script* to separate the output of different replicas into different files.

   mkdir -p /ccp_data/output && echo $RANDOM >> /ccp_data/output/`printenv ccptaskid`.txt

The following variables are related to the authentication and authorisation context of the Method execution. They can be used to access D4Science services in a secure and convenient way, even for very long lasting executions.

- **ccpiamurl** is the URL of the Identity management service.
- **ccpclientid** is the client_id to be used for requesting token renewal.
- **ccprefreshtoken** is a refresh token by which new access tokens can be requested.
- **ccpcontext** represents the context (VO or VRE) in which the Method execution has been requested.

As an example, the following Python code shows how to use these variables to request a token renewal.

.. code-block:: python
   :caption: How to request a login token and a UMA token for accessing D4Science services from inside Method code

   import os
   import requests

   # Get auth info from the environment variables
   refreshtoken = os.environ["ccprefreshtoken"]
   context = os.environ["ccpcontext"]
   clientid = os.environ["ccpclientid"]
   iam = os.environ["ccpiamurl"]

   # Auth related payloads and headers
   logindata = { 'grant_type' : 'refresh_token', 'client_id' : clientid, 'refresh_token' : refreshtoken}
   loginheaders = { "Accept" : "application/json", "Content-Type" : "application/x-www-form-urlencoded"}
   umadata = { 'grant_type' : 'urn:ietf:params:oauth:grant-type:uma-ticket', 'audience' : context}
   umaheaders = { "Accept" : "application/json", "Content-Type" : "application/x-www-form-urlencoded"}

   def getToken():
       # Login with the refresh (offline) token
       resp1 = requests.post(iam, data=logindata, headers=loginheaders)
       jwt = resp1.json()
       # Get a UMA token for the context
       umaheaders["Authorization"] = "Bearer " + jwt["access_token"]
       resp2 = requests.post(iam, data=umadata, headers=umaheaders)
       return resp2.json()["access_token"]

   # Get a valid token for the context
   tok = getToken()

   # List the VRE folder content (the workspace service base URL is deployment specific)
   workspace = "<workspace-service-base-url>"
   vrefolder = requests.get(workspace + "/vrefolder", headers={"Accept" : "application/json", "Authorization" : "Bearer " + tok}).json()

A special folder is provided in the Runtime of a Method execution for storing output files. Files are currently the only way for a Method to output results by value. The folder is named **/ccp_data**, and all files written to this folder are returned, in the context of the Execution, as a zip archive.

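For instance, a Python Method could persist its results as follows (a minimal sketch; the file name is arbitrary).

.. code-block:: python
   :caption: Writing output files from Method code so that they are returned with the Execution

   import json
   import os

   # Everything written under /ccp_data is collected into the Execution's zip archive.
   os.makedirs("/ccp_data/output", exist_ok=True)
   with open("/ccp_data/output/result.json", "w") as f:
       json.dump({"status": "done"}, f)
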
Executions
----------

An **Execution** represents the instantiation of a Method through a Request. The Request carries values for the inputs declared by the Method and a list of expected outputs. A dedicated data structure is created in the Execution repository as soon as an Execution request is accepted. The data structure acts as a folder that collects the request, all the outputs and all the metadata generated in order to execute a Method.

An Execution can be exported, imported and re-executed through the CCP GUI widgets or through REST API calls.

Anatomy of an Execution
~~~~~~~~~~~~~~~~~~~~~~~

The data structure of an Execution is meant to be as atomic and as self-describing as possible.
The following is a representation of the data structure of an Execution.

- **metadata**

  - **request.json** # The JSON message that requested the Execution
  - **method.json** # The JSON Method descriptor
  - **infrastructure.json** # The JSON Infrastructure descriptor
  - **instance.json** # The JSON descriptor of the container that played the role of the Runtime of the Execution

- **auth**

  - **jwt.json** # Authorization information of the user requesting the Execution

- **outputs**

  - **output.zip** # Zip archive of all output files and folders

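Assuming a downloaded Execution archive laid out as above (the archive file name below is hypothetical), it can be inspected offline, for example:

.. code-block:: python
   :caption: Sketch of inspecting an exported Execution archive

   import io
   import json
   import zipfile

   with zipfile.ZipFile("execution-export.zip") as exe:
       # The original request, with the input values that were submitted.
       request = json.loads(exe.read("metadata/request.json"))
       print("Requested inputs:", request.get("inputs"))
       # The output files are bundled in a nested zip archive.
       outputs = zipfile.ZipFile(io.BytesIO(exe.read("outputs/output.zip")))
       print("Output files:", outputs.namelist())
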
Method and Execution storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to execute a Method or to operate on an Execution, it needs to be kept inside the **Workbench**. The Workbench can be seen as a sort of short lived storage area that resides close to the CCP core components.

A Method that is on the Workbench can be cloned, edited, executed, deleted, exported or archived to an offline storage area. The offline storage area is a dedicated folder CCP/methods in a user's D4Science workspace. When exported to a file or archived to the workspace, a Method is a JSON file named after the Method's title and version.

An Execution that is on the Workbench can be browsed, downloaded, re-executed, deleted or archived to an offline storage area, which resides in a dedicated folder CCP/executions in a user's workspace. When downloaded to a file or archived to the workspace, an Execution is a zip archive with the structure described in the previous section.

Methods and Executions can be reimported to the Workbench either by uploading exported files or by passing the "shareable links" obtained from the workspace.

UI Widgets
----------

A set of graphical user interface (GUI) widgets is provided to allow a user to interact, from browser based applications, with the Methods and Executions stored in the Workbench.

Method list
~~~~~~~~~~~

The *Method list* widget is a visual representation of the list of Methods that a user is able to access in a given context, either as their owner or because they are shared in the context.

The following Figure shows an example visualization of the Method list.

.. figure:: /images/methodlistwidget.png
   :alt: Method list widget

   Method list widget

The Method list widget comprises a toolbar, a search field and the list of Methods. The Methods are organized by category, as shown in [1]. For every Method, the title, version, author and description are reported in the first two lines [2]. Tags and the compatible Infrastructure are shown as additional information [3]. It is possible to download a Method or a whole category, and to see how many of the shown Methods are executable [4]. A Method may not be executable if its compatible Infrastructure is not known or not available. A per-Method toolbar [5] allows the user to download, edit or execute a Method. From the global toolbar it is possible to refresh the list or upload a Method from a file [6], and also to reimport an archived Method from the workspace by copying and pasting its shareable link into the proper field and clicking the button [7].

Method editor
~~~~~~~~~~~~~

The *Method editor* widget is a visual tool for creating, deleting, editing or cloning a Method descriptor.

The following Figure shows an example visualization of the Method editor.

.. figure:: /images/methodeditorwidget.png
   :alt: Method editor widget

   Method editor widget

From the global toolbar [1] it is possible to save the edited Method, delete it or clear all the form fields. The metadata area [2] contains the controls to define all the metadata of a Method, including title, version, description, tags and categories. It is also possible to choose a compatible Infrastructure from the available ones. In the input definition area [3] the user can define all the input parameters with their type, format, encoding, cardinality and default values. In the output definition area [4] the user can define all the output files that can be expected from an Execution, with their type, format, encoding and cardinality. Shortcuts are provided to define, with one click, the standard output and error channels of a Method. In the scripting area [5] the deploy, execute and undeploy scripts can be defined.

Once the Method is saved, the user is automatically added as its Author. If the user marks the Method as public, the Method is made available to all members of the context (VO or VRE) in which the operation occurred.

Method execution form
~~~~~~~~~~~~~~~~~~~~~

Execution monitor
~~~~~~~~~~~~~~~~~

REST APIs: Interacting with Methods and Executions programmatically
--------------------------------------------------------------------

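Since Method descriptors follow the grammar of the OGC Processes API, programmatic interaction can be sketched along the lines of that specification. The base URL and exact paths below are placeholders modelled on OGC Processes conventions, not the confirmed CCP endpoints.

.. code-block:: python
   :caption: Sketch of programmatic interaction in the style of the OGC Processes API (hypothetical endpoints)

   import requests

   ccp = "https://ccp.example.org"  # placeholder base URL
   headers = {"Accept": "application/json", "Authorization": "Bearer <token>"}

   # List the Methods available in the current context (OGC Processes: GET /processes).
   methods = requests.get(ccp + "/processes", headers=headers).json()

   # Retrieve the full descriptor of a single Method (OGC Processes: GET /processes/{id}).
   method_id = "408d9dc5-ee37-4123-9f07-4294f13bce19"
   descriptor = requests.get(ccp + "/processes/" + method_id, headers=headers).json()

   # Request an asynchronous execution (OGC Processes: POST /processes/{id}/execution)
   # and poll the resulting job until it reaches a final state.
   resp = requests.post(ccp + "/processes/" + method_id + "/execution",
                        json={"inputs": {"mainclass": "example.HelloWorld"}},
                        headers=headers)
   job_url = resp.headers.get("Location")
   status = requests.get(job_url, headers=headers).json()
   print(status)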