Workflow parameters:

- graphInputPath: the path where the graph is stored
- workingPath: the path where the generated data will be stored
- datasourceIdWhitelist: a whitelist (comma separated; "-" for an empty list) of datasource ids
- datasourceTypeWhitelist: a whitelist (comma separated; "-" for an empty list) of datasource types
- datasourceIdBlacklist: a blacklist (comma separated; "-" for an empty list) of datasource ids
- esEventIndexName: the Elasticsearch index name for events
- esNotificationsIndexName: the Elasticsearch index name for notifications
- esIndexHost: the Elasticsearch host
- maxIndexedEventsForDsAndTopic: the maximum number of events for each (datasource, topic) pair
- brokerApiBaseUrl: the URL of the broker service API
- brokerDbUrl: the URL of the broker database
- brokerDbUser: the user of the broker database
- brokerDbPassword: the password of the broker database
- sparkDriverMemory: memory for the driver process
- sparkExecutorMemory: memory for each individual executor
- sparkExecutorCores: number of cores used by a single executor
- oozieActionShareLibForSpark2: Oozie action sharelib for Spark 2.*
- spark2ExtraListeners (com.cloudera.spark.lineage.NavigatorAppListener): Spark 2.* extra listeners class name
- spark2SqlQueryExecutionListeners (com.cloudera.spark.lineage.NavigatorQueryListener): Spark 2.* SQL query execution listeners class name
- spark2YarnHistoryServerAddress: Spark 2.* YARN history server address
- spark2EventLogDir: Spark 2.* event log directory location

Global configuration (${jobTracker}, ${nameNode}):

- mapreduce.job.queuename = ${queueName}
- oozie.launcher.mapred.job.queue.name = ${oozieLauncherQueueName}
- oozie.action.sharelib.for.spark = ${oozieActionShareLibForSpark2}

Kill-node message: Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]

Spark action (master: yarn, deploy mode: cluster):

- name: PartitionEventsByDsIdJob
- class: eu.dnetlib.dhp.broker.oa.PartitionEventsByDsIdJob
- jar: dhp-broker-events-${projectVersion}.jar
- spark options: --executor-cores=${sparkExecutorCores} --executor-memory=${sparkExecutorMemory} --driver-memory=${sparkDriverMemory} --conf spark.extraListeners=${spark2ExtraListeners} --conf spark.sql.queryExecutionListeners=${spark2SqlQueryExecutionListeners} --conf spark.yarn.historyServer.address=${spark2YarnHistoryServerAddress} --conf spark.eventLog.dir=${nameNode}${spark2EventLogDir} --conf spark.sql.shuffle.partitions=3840
- arguments: --graphPath ${graphInputPath} --workingPath ${workingPath}
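As a sketch of how these parameters could be supplied when submitting the workflow, a job.properties fragment might look like the following. All values below (paths, hosts, sizes) are placeholder assumptions for illustration, not values taken from this workflow:

```properties
# Hypothetical job.properties fragment for this workflow
graphInputPath=/tmp/broker/graph
workingPath=/tmp/broker/working
# "-" means an empty whitelist/blacklist
datasourceIdWhitelist=-
datasourceTypeWhitelist=-
datasourceIdBlacklist=-
esEventIndexName=events
esNotificationsIndexName=notifications
esIndexHost=localhost
maxIndexedEventsForDsAndTopic=100
brokerApiBaseUrl=http://localhost:8080/broker
brokerDbUrl=jdbc:postgresql://localhost:5432/broker
brokerDbUser=broker
brokerDbPassword=changeme
sparkDriverMemory=4G
sparkExecutorMemory=6G
sparkExecutorCores=4
```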
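The whitelist/blacklist parameters follow a small convention: a comma-separated list of ids, with "-" standing for the empty list. A minimal sketch of a parser for that convention (the function name `parse_ds_list` is hypothetical, not part of the workflow code):

```python
def parse_ds_list(value: str) -> list[str]:
    """Parse a comma-separated datasource list; '-' denotes the empty list."""
    value = value.strip()
    if value == "-" or not value:
        return []
    # Split on commas and drop surrounding whitespace around each id
    return [item.strip() for item in value.split(",") if item.strip()]

print(parse_ds_list("-"))          # []
print(parse_ds_list("ds1, ds2"))   # ['ds1', 'ds2']
```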