forked from lsmyrnaios/UrlsController
- Add the "getWorkersInfo" endpoint.
- Improve startup speed by using a faster remote server to get the host machine's public IP. This also reduces the risk of failing to obtain the public IP at all.
- Fix the detection of a different IP for a known worker.
- Improve documentation.
parent 5f75b48e95
commit c7bfd75973
README.md | 15
@@ -2,17 +2,20 @@
 The Controller's Application receives requests coming from the [Workers](https://code-repo.d4science.org/lsmyrnaios/UrlsWorker), constructs an assignments-list with data received from a database and returns the list to the workers.<br>
 Then, it receives the "WorkerReports", requests the full-texts from the workers, in batches, and uploads them on the S3-Object-Store. Finally, it writes the related reports, along with the updated file-locations, into the database.<br>
-The database used is the [Impala](https://impala.apache.org/).<br>
+<br>
+It can also process **Bulk-Import** requests, from compatible data sources, in which case it receives the full-text files immediately, without offloading crawling jobs to Workers.<br>
+<br>
+For interacting with the database we use [Impala](https://impala.apache.org/).<br>
 <br>
 
 **Statistics API**:
 
-- "**getNumberOfAllPayloads**" endpoint: **http://<IP>:<PORT>/api/stats/getNumberOfAllPayloads** <br>
+- "**getNumberOfAllPayloads**" endpoint: **http://\<IP\>:\<PORT\>/api/stats/getNumberOfAllPayloads** <br>
 This endpoint returns the total number of payloads existing in the database, independently of the way they were aggregated. This includes the payloads created by other pieces of software, before the PDF-Aggregation-Service was created.
-- "**getNumberOfPayloadsAggregatedByService**" endpoint: **http://<IP>:<PORT>/api/stats/getNumberOfPayloadsAggregatedByService** <br>
+- "**getNumberOfPayloadsAggregatedByService**" endpoint: **http://\<IP\>:\<PORT\>/api/stats/getNumberOfPayloadsAggregatedByService** <br>
 This endpoint returns the number of payloads aggregated by the PDF-Aggregation-Service itself. It excludes the payloads aggregated by other methods, by applying a date-filter for records created in 2021 or later.
-- "**getNumberOfPayloadsForDatasource**" endpoint: **http://<IP>:<PORT>/api/stats/getNumberOfPayloadsForDatasource?datasourceId="givenDatasourceId"** <br>
+- "**getNumberOfPayloadsForDatasource**" endpoint: **http://\<IP\>:\<PORT\>/api/stats/getNumberOfPayloadsForDatasource?datasourceId=\<givenDatasourceId\>** <br>
 This endpoint returns the number of payloads which belong to the datasource specified by the given datasourceId.
-- "**getNumberOfRecordsInspected**" endpoint: **http://<IP>:<PORT>/api/stats/getNumberOfRecordsInspected** <br>
+- "**getNumberOfRecordsInspected**" endpoint: **http://\<IP\>:\<PORT\>/api/stats/getNumberOfRecordsInspected** <br>
 This endpoint returns the number of records inspected by the PDF-Aggregation-Service.
 <br>
 <br>
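The statistics endpoints above are plain HTTP GETs, so a client only needs to assemble the right URL. The following sketch builds such URLs; the hostname, port, and datasource id are placeholder values, not taken from the project:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StatsUrlExample {
    // Builds the URL for one of the "/api/stats/*" endpoints described above.
    // Host, port and datasourceId are caller-supplied placeholders here.
    static String statsUrl(String host, int port, String endpoint, String datasourceId) {
        String base = "http://" + host + ":" + port + "/api/stats/" + endpoint;
        if (datasourceId == null)
            return base;
        // Query values may contain characters like ':' and must be URL-encoded.
        return base + "?datasourceId=" + URLEncoder.encode(datasourceId, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Prints: http://localhost:1880/api/stats/getNumberOfAllPayloads
        System.out.println(statsUrl("localhost", 1880, "getNumberOfAllPayloads", null));
        System.out.println(statsUrl("localhost", 1880, "getNumberOfPayloadsForDatasource", "openaire____::1234"));
    }
}
```

The port (1880) and the datasource id are made-up examples; the real values depend on the deployment and the database contents.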
@@ -27,4 +30,4 @@ If you want to build and run the app on a **Docker Container**, then run the scr
 
 Implementation notes:
 - For transferring the full-text files, we use Facebook's [**Zstandard**](https://facebook.github.io/zstd/) compression algorithm, which brings significant benefits in compression ratio and speed.
-- The uploaded full-text files follow this naming-scheme: "**datasourceID/recordId::fileHash.pdf**"
+- The uploaded full-text files follow this naming-scheme: "**datasourceID/recordID::fileHash.pdf**"
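The naming-scheme fixed in the hunk above can be expressed as a one-line helper. The identifiers below are invented for illustration; only the `datasourceID/recordID::fileHash.pdf` pattern comes from the README:

```java
public class FileNameExample {
    // Sketch of the S3 object naming-scheme: "datasourceID/recordID::fileHash.pdf".
    static String s3Key(String datasourceId, String recordId, String fileHash) {
        return datasourceId + "/" + recordId + "::" + fileHash + ".pdf";
    }

    public static void main(String[] args) {
        // Example with made-up identifiers.
        System.out.println(s3Key("ds01", "rec42", "abcd1234"));  // ds01/rec42::abcd1234.pdf
    }
}
```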
GeneralController.java
@@ -20,4 +20,14 @@ public class GeneralController {
 		return ResponseEntity.ok().build();
 	}
 
+
+	@GetMapping("getWorkersInfo")
+	public ResponseEntity<?> getWorkersInfo()
+	{
+		if ( UrlsController.workersInfoMap.isEmpty() )
+			return ResponseEntity.noContent().build();
+
+		return ResponseEntity.ok(UrlsController.workersInfoMap);
+	}
+
 }
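The new endpoint returns "204 No Content" while no worker has registered and "200 OK" with the map afterwards. A framework-free sketch of that decision (plain int status codes stand in for Spring's ResponseEntity, and a String stands in for the real WorkerInfo type):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WorkersInfoSketch {
    // Mirrors the endpoint's logic: 204 when the map is empty, 200 otherwise.
    static int statusFor(Map<String, String> workersInfoMap) {
        return workersInfoMap.isEmpty() ? 204 : 200;
    }

    public static void main(String[] args) {
        Map<String, String> map = new ConcurrentHashMap<>();
        System.out.println(statusFor(map));               // no workers yet
        map.put("worker_X", "10.0.0.5");                  // hypothetical worker entry
        System.out.println(statusFor(map));               // at least one worker known
    }
}
```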
UrlsController.java
@@ -36,7 +36,6 @@ public class UrlsController {
 	@Value("${services.pdfaggregation.controller.assignmentLimit}")
 	private int assignmentLimit;
 
-
 	public static final ConcurrentHashMap<String, WorkerInfo> workersInfoMap = new ConcurrentHashMap<>(6);
 
 
@@ -88,7 +87,7 @@ public class UrlsController {
 		WorkerInfo workerInfo = workersInfoMap.get(workerId);
 		if ( workerInfo != null ) {	// This worker has already been identified.
 			String savedWorkerIp = workerInfo.getWorkerIP();
-			if ( savedWorkerIp.equals(remoteAddr) ) {
+			if ( !savedWorkerIp.equals(remoteAddr) ) {
 				logger.warn("The worker with id \"" + workerId + "\" has changed IP from \"" + savedWorkerIp + "\" to \"" + remoteAddr + "\".");
 				workerInfo.setWorkerIP(remoteAddr);	// Set the new IP. The update will be reflected in the map.
 			}	// In this case, the worker may have previously informed the Controller that it has shut down, or it may have crashed.
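The hunk above fixes an inverted condition: the warning and IP update should fire only when the saved IP differs from the request's remote address. The corrected predicate, isolated for clarity (IP values are examples):

```java
public class IpChangeCheckSketch {
    // Returns true when the worker's saved IP differs from the current
    // request's remote address, i.e. when an update (and warning) is needed.
    // The old code fired on equality, so a changed IP was never detected.
    static boolean ipChanged(String savedWorkerIp, String remoteAddr) {
        return !savedWorkerIp.equals(remoteAddr);
    }

    public static void main(String[] args) {
        System.out.println(ipChanged("10.0.0.5", "10.0.0.5"));  // false: same IP, nothing to do
        System.out.println(ipChanged("10.0.0.5", "10.0.0.9"));  // true: worker moved, update it
    }
}
```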
UriBuilder.java
@@ -55,7 +55,7 @@ public class UriBuilder {
 	{
 		String publicIpAddress = "";
 		HttpURLConnection conn = null;
-		String urlString = "https://api.ipify.org/";
+		String urlString = "https://checkip.amazonaws.com/";
 		try {
 			conn = (HttpURLConnection) new URL(urlString).openConnection();
 			conn.setConnectTimeout(60_000);	// 1 minute
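Both services return the caller's public IP as plain text; checkip.amazonaws.com typically terminates the body with a newline, so the response should be trimmed and sanity-checked before use. A small parsing sketch (the validation regex is an assumption, not code from the project, and only covers IPv4):

```java
public class PublicIpParseSketch {
    // Trims the plain-text response body and checks it looks like an IPv4
    // address before accepting it as the host's public IP.
    static String parsePublicIp(String responseBody) {
        String ip = responseBody.trim();
        if ( !ip.matches("\\d{1,3}(\\.\\d{1,3}){3}") )
            throw new IllegalArgumentException("Unexpected response: " + ip);
        return ip;
    }

    public static void main(String[] args) {
        // "203.0.113.7" is a documentation-range example address.
        System.out.println(parsePublicIp("203.0.113.7\n"));  // 203.0.113.7
    }
}
```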