forked from D-Net/openaire-graph-docs
deduplication section revised, decision trees for research products added
This commit is contained in: parent 58c89d71da · commit c9cafec4da
@@ -3,18 +3,91 @@ sidebar_position: 3
---

# Clustering functions

## Ngrams

It creates ngrams from the input field. <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: ngram length = 3, maximum number = 4
List of ngrams: “sea”, “sta”, “mod”, “hig”
```
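
A minimal sketch of the idea in Python; the whitespace tokenization and the skipping of short words are assumptions for illustration, not the exact implementation:

```python
def ngrams(text: str, ngram_len: int = 3, max_ngrams: int = 4, min_word_len: int = 4) -> list[str]:
    # keep only "significant" words (assumption: at least min_word_len characters)
    words = [w.lower() for w in text.split() if len(w) >= min_word_len]
    # take the first ngram_len characters of each word, up to max_ngrams keys
    return [w[:ngram_len] for w in words[:max_ngrams]]

print(ngrams("Search for the Standard Model Higgs Boson"))
# ['sea', 'sta', 'mod', 'hig']
```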

## NgramPairs

It produces a list of concatenations of a pair of ngrams generated from different words. <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: ngram length = 3
List of ngrams: “sea”, “sta”, “mod”, “hig”
Ngram pairs: “seasta”, “stamod”, “modhig”
```
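
Under the same assumptions, NgramPairs can be sketched by concatenating consecutive ngrams, reusing the `ngrams` helper above:

```python
def ngram_pairs(text: str, ngram_len: int = 3, max_ngrams: int = 4) -> list[str]:
    grams = ngrams(text, ngram_len, max_ngrams)
    # concatenate each ngram with the ngram of the following word
    return [a + b for a, b in zip(grams, grams[1:])]

print(ngram_pairs("Search for the Standard Model Higgs Boson"))
# ['seasta', 'stamod', 'modhig']
```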

## SuffixPrefix

It produces ngram pairs in a particular way: it concatenates the suffix of a word with the prefix of the next word in the input string. A specialization of this function is available as SortedSuffixPrefix, which returns a sorted list. <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: suffix and prefix length = 3, maximum number = 2
Output list: “ardmod” (suffix of the word “Standard” + prefix of the word “Model”), “rchsta” (suffix of the word “Search” + prefix of the word “Standard”)
```
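
A sketch that reproduces the example above; the rule that the suffix-donor word must span at least twice the ngram length is an assumption made to match the documented output:

```python
def sorted_suffix_prefix(text: str, length: int = 3, max_keys: int = 2, min_word_len: int = 4) -> list[str]:
    words = [w.lower() for w in text.split() if len(w) >= min_word_len]
    keys = [
        a[-length:] + b[:length]          # suffix of a word + prefix of the next one
        for a, b in zip(words, words[1:])
        if len(a) >= 2 * length           # assumption: suffix must not overlap the word's own prefix
    ]
    return sorted(keys)[:max_keys]        # SortedSuffixPrefix: sorted, truncated list

print(sorted_suffix_prefix("Search for the Standard Model Higgs Boson"))
# ['ardmod', 'rchsta']
```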

## Acronyms

It creates a number of acronyms out of the words in the input field. <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Output: "ssmhb"
```
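
The function can emit several acronym variants; a sketch of the basic case, under the same significant-word assumption as above:

```python
def acronym(text: str, min_word_len: int = 4) -> str:
    # first letter of each significant word, lowercased
    return "".join(w[0].lower() for w in text.split() if len(w) >= min_word_len)

print(acronym("Search for the Standard Model Higgs Boson"))  # 'ssmhb'
```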

## KeywordsClustering

It creates keys by extracting keywords, out of a customizable list, from the input field. <br />

```
Example:
Input string: “University of Pisa”
Output: "key::001" (code that identifies the keyword "University" in the customizable list)
```
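
A sketch with a hypothetical two-entry keyword list (the real list and its codes are configurable):

```python
KEYWORDS = {"university": "001", "institute": "002"}  # hypothetical excerpt of the list

def keyword_keys(text: str) -> list[str]:
    return [f"key::{KEYWORDS[w]}" for w in text.lower().split() if w in KEYWORDS]

print(keyword_keys("University of Pisa"))  # ['key::001']
```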

## LowercaseClustering

It creates keys by lowercasing the input field. <br />

```
Example:
Input string: “10.001/ABCD”
Output: "10.001/abcd"
```

## RandomClusteringFunction

It creates random keys from the input field. <br />

## SpaceTrimmingFieldValue

It creates keys by lowercasing the input field and removing its spaces (as the example shows, non-significant words are dropped as well). <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Output: "searchstandardmodelhiggsboson"
```

## UrlClustering

It creates keys for a URL field by extracting the domain. <br />

```
Example:
Input string: “http://www.google.it/page”
Output: "www.google.it"
```
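
A sketch using only the Python standard library:

```python
from urllib.parse import urlparse

def url_key(url: str) -> str:
    # the network location (domain) of the URL is the clustering key
    return urlparse(url).netloc

print(url_key("http://www.google.it/page"))  # 'www.google.it'
```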

## WordsStatsSuffixPrefixChain

It creates keys containing concatenated statistics of the field, i.e. the number of significant words, the number of characters of the normalized value modulo a given parameter, and a chain of suffixes and prefixes of the words. <br />

```
Example:
Input string: “Search for the Standard Model Higgs Boson”
Parameters: mod = 10
Output list: "5-3-seaardmod" (number of words + number of characters % 10 + prefix of the word "Search" + suffix of the word "Standard" + prefix of the word "Model"), "5-3-rchstadel" (number of words + number of characters % 10 + suffix of the word "Search" + prefix of the word "Standard" + suffix of the word "Model")
```
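
A sketch that reproduces the example; selecting significant words and counting the characters of the normalized value (spaces included) are assumptions that match the documented keys:

```python
def words_stats_suffix_prefix_chain(title: str, mod: int = 10, min_word_len: int = 4) -> list[str]:
    words = [w.lower() for w in title.split() if len(w) >= min_word_len]
    normalized = " ".join(words)
    stats = f"{len(words)}-{len(normalized) % mod}"   # e.g. "5-3"
    w = words[:3]                                     # chain built on the first 3 words
    chain1 = w[0][:3] + w[1][-3:] + w[2][:3]          # prefix + suffix + prefix
    chain2 = w[0][-3:] + w[1][:3] + w[2][-3:]         # suffix + prefix + suffix
    return [f"{stats}-{chain1}", f"{stats}-{chain2}"]

print(words_stats_suffix_prefix_chain("Search for the Standard Model Higgs Boson"))
# ['5-3-seaardmod', '5-3-rchstadel']
```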

@@ -1,29 +1,28 @@

# Deduplication

Metadata records about the same scholarly work can be collected from different providers. Each metadata record can possibly carry different information because, for example, some providers are not aware of links to projects, keywords or other details. Another common case is when OpenAIRE collects one metadata record from a repository about a pre-print and another record from a journal about the published article. For the provision of statistics, OpenAIRE must identify those cases and “merge” the two metadata records, so that the scholarly work is counted only once in the statistics OpenAIRE produces.

## Methodology overview

The deduplication process can be divided into three different phases:

* Candidate identification (clustering)
* Duplicates identification (pair-wise comparisons)
* Duplicates grouping (transitive closure)

<p align="center">
<img loading="lazy" alt="Deduplication Workflow" src="/img/docs/deduplication-workflow.png" width="100%" className="img_node_modules-@docusaurus-theme-classic-lib-theme-MDXComponents-Img-styles-module"/>
</p>

### Candidate identification (clustering)

Clustering is a common heuristic used to overcome the N x N complexity required to match all pairs of objects to identify the equivalent ones. The challenge is to identify a [clustering function](./clustering-functions) that maximizes the chance of comparing only records that may lead to a match, while minimizing the number of equivalent records that never end up being compared. Since the equivalence function is to some extent tolerant to minimal errors (e.g. transposed characters in the title, or minimal differences in letters), this function must be neither too precise (e.g. a hash of the title) nor too flexible (e.g. random ngrams of the title). On the other hand, in some cases the equality of two records can only be determined by their PIDs (e.g. DOI), as the metadata properties are very different across versions and no [clustering function](./clustering-functions) will ever bring them into the same cluster.

### Duplicates identification (pair-wise comparisons)

Pair-wise comparisons are conducted over records in the same cluster, following the strategy defined in the decision tree. A different decision tree is adopted depending on the type of the entity being processed.

To further limit the number of comparisons, a sliding window mechanism is used: (i) records in the same cluster are lexicographically sorted by their title, (ii) a window of K records slides over the cluster, and (iii) records ending up in the same window are pair-wise compared. Each comparison produces a similarity relation when the pair of records matches. Such relations are then used as input for the duplicates grouping stage.
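
A sketch of the mechanism; the record structure and the window size are illustrative:

```python
def sliding_window_pairs(cluster: list[dict], window: int = 80):
    """Yield the candidate pairs actually compared within one cluster."""
    records = sorted(cluster, key=lambda r: r["normalized_title"])  # (i) lexicographic sort
    for i, first in enumerate(records):                             # (ii) slide the window
        for second in records[i + 1 : i + 1 + window]:              # (iii) pair-wise candidates
            yield first, second
```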

### Duplicates grouping (transitive closure)

Once the similarity relations between pairs of records are drawn, the groups of equivalent records are obtained (transitive closure, i.e. “mesh”). From each such set a new representative record is derived, which inherits all properties from the merged records and keeps track of their provenance.
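
The grouping step can be sketched as computing connected components over the similarity relations; a union-find sketch, not the actual implementation:

```python
def duplicate_groups(similarity_relations: list[tuple[str, str]]) -> list[set[str]]:
    """Group record IDs connected, directly or transitively, by similarity relations."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in similarity_relations:
        parent[find(a)] = find(b)           # union the two components

    groups: dict[str, set[str]] = {}
    for record_id in parent:
        groups.setdefault(find(record_id), set()).add(record_id)
    return list(groups.values())
```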

@@ -4,48 +4,60 @@ sidebar_position: 1

# Research results

Metadata records about the same scholarly work can be collected from different providers. Each metadata record can possibly carry different information because, for example, some providers are not aware of links to projects, keywords or other details. Another common case is when OpenAIRE collects one metadata record from a repository about a pre-print and another record from a journal about the published article. For the provision of statistics, OpenAIRE must identify those cases and “merge” the two metadata records, so that the scholarly work is counted only once in the statistics OpenAIRE produces.

Duplicates among research results are identified among results of the same type (publications, datasets, software, other research products). If two duplicate results are aggregated, for example, one as a dataset and one as a software, they will never be compared and will never be identified as duplicates.

OpenAIRE supports different deduplication strategies based on the type of results.

## Methodology overview

The next sections describe how each stage of the deduplication workflow operates for research results.
### Candidate identification (clustering)

To match the requirement of limiting the number of comparisons, OpenAIRE clustering for research products works with two functions:

* *DOI-based function*: the function generates the DOI when this is provided as part of the record properties;
* *Title-based function*: the function generates a key that depends on (i) the number of significant words in the title (after normalization, stemming, etc.), (ii) the number of characters of the normalized title modulo 10, and (iii) a string obtained as an alternation of the functions prefix(3) and suffix(3) (and vice versa) on the first 3 words (2 words if the title only has 2). For example, the title “Search for the Standard Model Higgs Boson” becomes “search standard model higgs boson”, with the two keys `5-3-seaardmod` and `5-3-rchstadel`.

To give an idea, this configuration generates around 77Mi blocks, which we limited to 200 records each (only 15K blocks are affected by the cut), and entails 260Bi matches.

### Duplicates identification (pair-wise comparisons)

Comparisons in a block are performed using a *sliding window* set to 50 records. The records are sorted lexicographically on a normalized version of their titles. The 1st record is compared against the 50 following ones using the decision tree, then the 2nd, and so on, for an N log N complexity.

A different decision tree is adopted depending on the type of the entity being processed. Similarity relations drawn in this stage are subsequently used to perform the duplicates grouping.

#### Publications

For each pair of publications in a cluster the following strategy (depicted in the figure below) is applied. The comparison goes through different stages:
1. *trusted pids check*: comparison of the trusted pid lists (in the `pid` field of the record). If at least one pid is equivalent, the records match and the similarity relation is drawn.
2. *instance type check*: comparison of the instance types (indicating the subtype of the record, e.g. presentation, conference object, etc.). If the instance types are not compatible, then the records do not match. Otherwise, the comparison proceeds to the next stage.
3. *untrusted pids check*: comparison of all the available pids (in the `pid` and the `alternateid` fields of the record). No similarity relation is drawn in this stage: if at least one pid is equivalent, the next stage will be a *soft check*, otherwise the next stage is a *strong check*.
4. *soft check*: comparison of the record titles with the Levenshtein distance, normalized to a similarity score in [0,1] where 1 means “equal”. If the similarity is above 0.9, then the similarity relation is drawn.
5. *strong check*: a comparison composed of three sub-stages, involving (i) the comparison of the author list sizes and of the record versions, to determine whether they are coherent, (ii) the comparison of the record titles with the Levenshtein distance, to determine whether the similarity is above 0.99, and (iii) a "smart" comparison of the author lists, to check whether the common authors are more than 60%.

<p align="center">
<img loading="lazy" alt="Publications Decision Tree" src="/img/docs/decisiontree-publication.png" width="100%" className="img_node_modules-@docusaurus-theme-classic-lib-theme-MDXComponents-Img-styles-module"/>
</p>
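
A condensed sketch of this decision tree in Python. The field names, the equality-based instance type check, and the exact-match author comparison are simplifying assumptions over what the figure shows:

```python
def levenshtein_similarity(a: str, b: str) -> float:
    """Normalized Levenshtein similarity in [0, 1]; 1.0 means equal strings."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))

def match_publications(a: dict, b: dict) -> bool:
    if set(a["pid"]) & set(b["pid"]):                    # 1. trusted pids check
        return True
    if a["instancetype"] != b["instancetype"]:           # 2. instance type check (simplified)
        return False
    all_a = set(a["pid"]) | set(a["alternateid"])        # 3. untrusted pids check
    all_b = set(b["pid"]) | set(b["alternateid"])
    if all_a & all_b:
        return levenshtein_similarity(a["title"], b["title"]) > 0.9   # 4. soft check
    common = len(set(a["authors"]) & set(b["authors"]))  # 5. strong check (simplified)
    return (len(a["authors"]) == len(b["authors"])
            and levenshtein_similarity(a["title"], b["title"]) > 0.99
            and common / max(len(a["authors"]), 1) > 0.6)
```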

#### Software

For each pair of software records in a cluster the following strategy (depicted in the figure below) is applied. The comparison goes through different stages:
1. *pids check*: comparison of the pids in the records. No similarity relation is drawn in this stage; it is only used to establish the threshold for the title comparison. If there is at least one common pid, then the next stage is a *soft check*, otherwise the next stage is a *strong check*.
2. *soft check*: comparison of the record titles with the Levenshtein distance. If the similarity is above 0.9, then the similarity relation is drawn.
3. *strong check*: comparison of the record titles with the Levenshtein distance. If the similarity is above 0.99, then the similarity relation is drawn.

<p align="center">
<img loading="lazy" alt="Software Decision Tree" src="/img/docs/decisiontree-software.png" width="85%" className="img_node_modules-@docusaurus-theme-classic-lib-theme-MDXComponents-Img-styles-module"/>
</p>
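
Reusing the helpers from the publication sketch above, the software tree reduces to threshold selection (again a simplification of the figure):

```python
def match_software(a: dict, b: dict) -> bool:
    shared_pid = bool(set(a["pid"]) & set(b["pid"]))   # 1. pids check selects the threshold
    threshold = 0.9 if shared_pid else 0.99            # 2. soft check / 3. strong check
    return levenshtein_similarity(a["title"], b["title"]) > threshold
```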

#### Datasets and Other types of research products

For each pair of datasets or other types of research products in a cluster, the strategy depicted in the figure below is applied. The decision tree is almost identical to the publication decision tree, with the only exception of the *instance type check* stage: since such records do not have a comparable instance type, the check is not performed and the decision tree node is skipped.

<p align="center">
<img loading="lazy" alt="Dataset and Other types of research products Decision Tree" src="/img/docs/decisiontree-dataset-orp.png" width="90%" className="img_node_modules-@docusaurus-theme-classic-lib-theme-MDXComponents-Img-styles-module"/>
</p>

### Duplicates grouping (transitive closure)

Similarity relations are grouped by transitive closure, and each group is merged into a representative record. The general concept is that the field coming from the record with the higher "trust" value is used as the reference for the corresponding field of the representative record.

The IDs of the representative records are obtained by appending the prefix `dedup_` to the MD5 hash of the first ID (given their lexicographical ordering). If the group of merged records contains a trusted ID (e.g. a DOI), the `doi` keyword is also added to the prefix.
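
A sketch of the ID construction; the exact separators and the placement of the `doi` keyword are assumptions:

```python
import hashlib

def representative_id(member_ids: list[str], has_trusted_id: bool) -> str:
    first = min(member_ids)                                # lexicographically first ID
    digest = hashlib.md5(first.encode("utf-8")).hexdigest()
    prefix = "dedup_doi_" if has_trusted_id else "dedup_"  # assumption on the prefix layout
    return prefix + digest
```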