PyKEEN (Python KnowlEdge EmbeddiNgs) is a Python package designed to train and evaluate knowledge graph embedding models (incorporating multi-modal information).
Installation • Quickstart • Datasets (37) • Inductive Datasets (5) • Models (40) • Support • Citation
The latest stable version of PyKEEN requires Python 3.9+. It can be downloaded and installed from PyPI with:

```shell
pip install pykeen
```

The latest version of PyKEEN can be installed directly from the source code on GitHub with:

```shell
pip install git+https://github.com/pykeen/pykeen.git
```
More information about installation (e.g., development mode, Windows installation, Colab, Kaggle, extras) can be found in the installation documentation.
This example shows how to train a model on a dataset and evaluate it.

The fastest way to get up and running is to use the pipeline function. It provides a high-level entry into the extensible functionality of this package. The following example shows how to train and evaluate the TransE model on the Nations dataset. By default, the training loop uses the stochastic local closed world assumption (sLCWA) training approach and evaluates with rank-based evaluation.

```python
from pykeen.pipeline import pipeline

result = pipeline(
    model='TransE',
    dataset='nations',
)
```
The results are returned in an instance of the PipelineResult dataclass that has attributes for the trained model, the training loop, the evaluation, and more. See the tutorials on using your own dataset, understanding the evaluation, and making novel link predictions.
PyKEEN is extensible such that:

- Each model has the same API, so anything from `pykeen.models` can be dropped in
- Each training loop has the same API, so anything from `pykeen.training`, such as `pykeen.training.LCWATrainingLoop`, can be dropped in
- Triples factories can be generated by the user with `pykeen.triples.TriplesFactory`

The full documentation can be found at https://pykeen.readthedocs.io. Below are the models, datasets, training modes, evaluators, and metrics implemented in `pykeen`.
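To illustrate the idea behind a triples factory (this is a concept sketch, not PyKEEN's implementation): it assigns contiguous integer IDs to entity and relation labels so that string-labeled triples can be handled as index arrays.

```python
# Concept sketch: map string-labeled triples to contiguous integer IDs,
# which is the core job of a triples factory. Illustration only.
def index_triples(triples):
    entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
    relations = sorted({r for _, r, _ in triples})
    entity_to_id = {e: i for i, e in enumerate(entities)}
    relation_to_id = {r: i for i, r in enumerate(relations)}
    mapped = [
        (entity_to_id[h], relation_to_id[r], entity_to_id[t])
        for h, r, t in triples
    ]
    return entity_to_id, relation_to_id, mapped

triples = [("brazil", "borders", "uk"), ("uk", "accuses", "brazil")]
entity_to_id, relation_to_id, mapped = index_triples(triples)
```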
The following 37 datasets are built into PyKEEN. The citation for each dataset corresponds to either the paper describing the dataset, the first paper published using the dataset with knowledge graph embedding models, or the URL for the dataset if neither of the first two are available. If you want to use a custom dataset, see the Bring Your Own Dataset tutorial. If you have a suggestion for another dataset to include in PyKEEN, please let us know here.
The following 5 inductive datasets are built into PyKEEN.
| Name | Documentation | Citation |
|---|---|---|
| ILPC2022 Large | `pykeen.datasets.ILPC2022Large` | Galkin et al., 2022 |
| ILPC2022 Small | `pykeen.datasets.ILPC2022Small` | Galkin et al., 2022 |
| FB15k-237 | `pykeen.datasets.InductiveFB15k237` | Teru et al., 2020 |
| NELL | `pykeen.datasets.InductiveNELL` | Teru et al., 2020 |
| WN18RR | `pykeen.datasets.InductiveWN18RR` | Teru et al., 2020 |
The following 20 representations are implemented by PyKEEN.
The following 34 interactions are implemented by PyKEEN.
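For intuition about what an interaction function is, consider TransE, which scores a triple (h, r, t) as the negated distance between h + r and t. The following is a plain-Python sketch of that scoring idea, not PyKEEN's implementation:

```python
import math

# Sketch of the TransE interaction: score(h, r, t) = -||h + r - t||.
# Higher (less negative) scores indicate more plausible triples.
def transe_score(h, r, t):
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# When the relation vector translates the head exactly onto the tail,
# the score is 0, the best possible value.
perfect = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])
worse = transe_score([1.0, 0.0], [0.0, 1.0], [3.0, 2.0])
```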
The following 40 models are implemented by PyKEEN.
The following 15 losses are implemented by PyKEEN.
| Name | Reference | Description |
|---|---|---|
| Adversarially weighted binary cross entropy (with logits) | `pykeen.losses.AdversarialBCEWithLogitsLoss` | An adversarially weighted BCE loss. |
| Binary cross entropy (after sigmoid) | `pykeen.losses.BCEAfterSigmoidLoss` | The numerically unstable version of explicit sigmoid + BCE loss. |
| Binary cross entropy (with logits) | `pykeen.losses.BCEWithLogitsLoss` | The binary cross entropy loss. |
| Cross entropy | `pykeen.losses.CrossEntropyLoss` | The cross entropy loss that evaluates the cross entropy after softmax output. |
| Double Margin | `pykeen.losses.DoubleMarginLoss` | A limit-based scoring loss with separate margins for positive and negative elements, from [sun2018]_. |
| Focal | `pykeen.losses.FocalLoss` | The focal loss proposed by [lin2018]_. |
| InfoNCE loss with additive margin | `pykeen.losses.InfoNCELoss` | The InfoNCE loss with additive margin proposed by [wang2022]_. |
| Margin ranking | `pykeen.losses.MarginRankingLoss` | The pairwise hinge loss (i.e., margin ranking loss). |
| Mean squared error | `pykeen.losses.MSELoss` | The mean squared error loss. |
| Self-adversarial negative sampling | `pykeen.losses.NSSALoss` | The self-adversarial negative sampling loss function proposed by [sun2019]_. |
| Pairwise logistic | `pykeen.losses.PairwiseLogisticLoss` | The pairwise logistic loss. |
| Pointwise hinge | `pykeen.losses.PointwiseHingeLoss` | The pointwise hinge loss. |
| Soft margin ranking | `pykeen.losses.SoftMarginRankingLoss` | The soft pairwise hinge loss (i.e., soft margin ranking loss). |
| Softplus | `pykeen.losses.SoftplusLoss` | The pointwise logistic loss (i.e., softplus loss). |
| Soft pointwise hinge | `pykeen.losses.SoftPointwiseHingeLoss` | The soft pointwise hinge loss. |
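For intuition, the pairwise hinge (margin ranking) loss penalizes a negative triple whenever its score comes within a fixed margin of a positive triple's score; a plain-Python sketch of the definition:

```python
# Pairwise hinge (margin ranking) loss:
# L = max(0, margin + f(negative) - f(positive)).
# The loss is zero once the positive outscores the negative by the margin.
def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    return max(0.0, margin + neg_score - pos_score)

well_separated = margin_ranking_loss(pos_score=2.0, neg_score=0.5)  # no penalty
violating = margin_ranking_loss(pos_score=0.5, neg_score=0.4)       # penalized
```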
The following 6 regularizers are implemented by PyKEEN.
| Name | Reference | Description |
|---|---|---|
| combined | `pykeen.regularizers.CombinedRegularizer` | A convex combination of regularizers. |
| lp | `pykeen.regularizers.LpRegularizer` | A simple L_p norm based regularizer. |
| no | `pykeen.regularizers.NoRegularizer` | A regularizer which does not perform any regularization. |
| normlimit | `pykeen.regularizers.NormLimitRegularizer` | A regularizer which formulates a soft constraint on a maximum norm. |
| orthogonality | `pykeen.regularizers.OrthogonalityRegularizer` | A regularizer for the soft orthogonality constraints from [wang2014]_. |
| powersum | `pykeen.regularizers.PowerSumRegularizer` | A simple x^p based regularizer. |
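For intuition, an L_p-style regularizer adds a scaled penalty built from the p-th powers of embedding components to the loss; a plain-Python sketch of the idea (an illustration, not PyKEEN's implementation, which also supports normalization and true p-norms):

```python
# Sketch of an L_p-style penalty: weight * sum over all components |x|^p,
# computed over a batch of embedding vectors. Illustration only.
def lp_regularization(vectors, p=2, weight=0.01):
    return weight * sum(abs(x) ** p for vec in vectors for x in vec)

penalty = lp_regularization([[3.0, 4.0], [0.0, 0.0]], p=2, weight=0.01)
```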
The following 3 training loops are implemented in PyKEEN.
| Name | Reference | Description |
|---|---|---|
| lcwa | `pykeen.training.LCWATrainingLoop` | A training loop that is based upon the local closed world assumption (LCWA). |
| slcwa | `pykeen.training.SLCWATrainingLoop` | A training loop that uses the stochastic local closed world assumption (sLCWA) training approach. |
| symmetriclcwa | `pykeen.training.SymmetricLCWATrainingLoop` | A "symmetric" LCWA training loop that scores heads and tails at once. |
The following 3 negative samplers are implemented in PyKEEN.
| Name | Reference | Description |
|---|---|---|
| basic | `pykeen.sampling.BasicNegativeSampler` | A basic negative sampler. |
| bernoulli | `pykeen.sampling.BernoulliNegativeSampler` | An implementation of the Bernoulli negative sampling approach proposed by [wang2014]_. |
| pseudotyped | `pykeen.sampling.PseudoTypedNegativeSampler` | A sampler that accounts for which entities co-occur with a relation. |
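For intuition, basic negative sampling corrupts a positive triple by replacing either its head or its tail with a uniformly drawn entity; a concept sketch in plain Python (not PyKEEN's implementation, which works on batched index tensors):

```python
import random

# Concept sketch of basic negative sampling: corrupt the head or the tail
# of a positive (head, relation, tail) triple with a uniform random entity.
def corrupt(triple, num_entities, rng):
    h, r, t = triple
    if rng.random() < 0.5:
        return (rng.randrange(num_entities), r, t)  # corrupt the head
    return (h, r, rng.randrange(num_entities))      # corrupt the tail

rng = random.Random(0)
negatives = [corrupt((0, 0, 1), num_entities=14, rng=rng) for _ in range(5)]
```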
The following 2 stoppers are implemented in PyKEEN.
| Name | Reference | Description |
|---|---|---|
| early | `pykeen.stoppers.EarlyStopper` | A harness for early stopping. |
| nop | `pykeen.stoppers.NopStopper` | A stopper that does nothing. |
The following 5 evaluators are implemented in PyKEEN.
| Name | Reference | Description |
|---|---|---|
| classification | `pykeen.evaluation.ClassificationEvaluator` | An evaluator that uses classification metrics. |
| macrorankbased | `pykeen.evaluation.MacroRankBasedEvaluator` | Macro-average rank-based evaluation. |
| ogb | `pykeen.evaluation.OGBEvaluator` | A sampled, rank-based evaluator that applies a custom OGB evaluation. |
| rankbased | `pykeen.evaluation.RankBasedEvaluator` | A rank-based evaluator for KGE models. |
| sampledrankbased | `pykeen.evaluation.SampledRankBasedEvaluator` | A rank-based evaluator using sampled negatives instead of all negatives. |
The following 44 metrics are implemented in PyKEEN.
| Name | Direction | Description | Type |
|---|---|---|---|
| Accuracy | 📈 | The ratio of the number of correct classifications to the total number. | Classification |
| Area Under The Receiver Operating Characteristic Curve | 📈 | The area under the receiver operating characteristic curve. | Classification |
| Average Precision Score | 📈 | The average precision across different thresholds. | Classification |
| Balanced Accuracy Score | 📈 | The average of recall obtained on each class. | Classification |
| Diagnostic Odds Ratio | 📈 | The ratio of positive and negative likelihood ratio. | Classification |
| F1 Score | 📈 | The harmonic mean of precision and recall. | Classification |
| False Discovery Rate | 📉 | The proportion of predicted positives which are true negative. | Classification |
| False Negative Rate | 📉 | The probability that a truly positive triple is predicted negative. | Classification |
| False Omission Rate | 📉 | The proportion of predicted negatives which are true positive. | Classification |
| False Positive Rate | 📉 | The probability that a truly negative triple is predicted positive. | Classification |
| Fowlkes Mallows Index | 📈 | The Fowlkes Mallows index. | Classification |
| Informedness | 📈 | The informedness metric. | Classification |
| Matthews Correlation Coefficient | 📈 | The Matthews correlation coefficient (MCC). | Classification |
| Negative Likelihood Ratio | 📉 | The ratio of false negative rate to true negative rate. | Classification |
| Negative Predictive Value | 📈 | The proportion of predicted negatives which are true negatives. | Classification |
| Number of Scores | 📈 | The number of scores. | Classification |
| Positive Likelihood Ratio | 📈 | The ratio of true positive rate to false positive rate. | Classification |
| Positive Predictive Value | 📈 | The proportion of predicted positives which are true positive. | Classification |
| Prevalence Threshold | 📉 | The prevalence threshold. | Classification |
| Threat Score | 📈 | The ratio of true positives to the number of true positives, false negatives, and false positives. | Classification |
| True Negative Rate | 📈 | The probability that a truly negative triple is predicted negative. | Classification |
| True Positive Rate | 📈 | The probability that a truly positive triple is predicted positive. | Classification |
| Adjusted Arithmetic Mean Rank (AAMR) | 📉 | The mean over all ranks divided by its expected value. | Ranking |
| Adjusted Arithmetic Mean Rank Index (AAMRI) | 📈 | The re-indexed adjusted mean rank (AAMR). | Ranking |
| Adjusted Geometric Mean Rank Index (AGMRI) | 📈 | The re-indexed adjusted geometric mean rank (AGMRI). | Ranking |
| Adjusted Hits at K | 📈 | The re-indexed adjusted hits at K. | Ranking |
| Adjusted Inverse Harmonic Mean Rank | 📈 | The re-indexed adjusted MRR. | Ranking |
| Geometric Mean Rank (GMR) | 📉 | The geometric mean over all ranks. | Ranking |
| Harmonic Mean Rank (HMR) | 📉 | The harmonic mean over all ranks. | Ranking |
| Hits @ K | 📈 | The relative frequency of ranks not larger than a given k. | Ranking |
| Inverse Arithmetic Mean Rank (IAMR) | 📈 | The inverse of the arithmetic mean over all ranks. | Ranking |
| Inverse Geometric Mean Rank (IGMR) | 📈 | The inverse of the geometric mean over all ranks. | Ranking |
| Inverse Median Rank | 📈 | The inverse of the median over all ranks. | Ranking |
| Mean Rank (MR) | 📉 | The arithmetic mean over all ranks. | Ranking |
| Mean Reciprocal Rank (MRR) | 📈 | The inverse of the harmonic mean over all ranks. | Ranking |
| Median Rank | 📉 | The median over all ranks. | Ranking |
| z-Geometric Mean Rank (zGMR) | 📈 | The z-scored geometric mean rank. | Ranking |
| z-Hits at K | 📈 | The z-scored hits at K. | Ranking |
| z-Mean Rank (zMR) | 📈 | The z-scored mean rank. | Ranking |
| z-Mean Reciprocal Rank (zMRR) | 📈 | The z-scored mean reciprocal rank. | Ranking |
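The core rank-based metrics can be made concrete with a short plain-Python sketch of their definitions (an illustration, not PyKEEN's implementation), where rank 1 means the true triple was scored best among all candidates:

```python
# Core rank-based metrics over the ranks of the true triples.
def mean_rank(ranks):
    # Arithmetic mean over all ranks; lower is better.
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    # Inverse of the harmonic mean over all ranks; higher is better.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    # Relative frequency of ranks not larger than k; higher is better.
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 2, 5, 10, 100]
mr = mean_rank(ranks)
mrr = mean_reciprocal_rank(ranks)
h10 = hits_at_k(ranks, k=10)
```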
The following 8 trackers are implemented in PyKEEN.
| Name | Reference | Description |
|---|---|---|
| console | `pykeen.trackers.ConsoleResultTracker` | A class that directly prints to console. |
| csv | `pykeen.trackers.CSVResultTracker` | Tracking results to a CSV file. |
| json | `pykeen.trackers.JSONResultTracker` | Tracking results to a JSON lines file. |
| mlflow | `pykeen.trackers.MLFlowResultTracker` | A tracker for MLflow. |
| neptune | `pykeen.trackers.NeptuneResultTracker` | A tracker for Neptune.ai. |
| python | `pykeen.trackers.PythonResultTracker` | A tracker which stores everything in Python dictionaries. |
| tensorboard | `pykeen.trackers.TensorBoardResultTracker` | A tracker for TensorBoard. |
| wandb | `pykeen.trackers.WANDBResultTracker` | A tracker for Weights and Biases. |
PyKEEN includes a set of curated experimental settings for reproducing past landmark experiments. They can be accessed and run like:

```shell
pykeen experiments reproduce tucker balazevic2019 fb15k
```

where the three arguments are the model name, the reference, and the dataset. The output directory can optionally be set with `-d`.
PyKEEN includes the ability to specify ablation studies using the hyper-parameter optimization module. They can be run like:

```shell
pykeen experiments ablation ~/path/to/config.json
```
We used PyKEEN to perform a large-scale reproducibility and benchmarking study, which is described in our article:

```bibtex
@article{ali2020benchmarking,
    author={Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Galkin, Mikhail and Sharifzadeh, Sahand and Fischer, Asja and Tresp, Volker and Lehmann, Jens},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    title={Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models under a Unified Framework},
    year={2021},
    pages={1-1},
    doi={10.1109/TPAMI.2021.3124805}
}
```

We have made all code, experimental configurations, results, and analyses that led to our interpretations available at https://github.com/pykeen/benchmarking.
Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.
If you have questions, please use the GitHub discussions feature at https://github.com/pykeen/pykeen/discussions/new.
This project has been supported by several organizations (in alphabetical order):
- Bayer
- CoronaWhy
- Enveda Biosciences
- Fraunhofer Institute for Algorithms and Scientific Computing
- Fraunhofer Institute for Intelligent Analysis and Information Systems
- Fraunhofer Center for Machine Learning
- Harvard Program in Therapeutic Science - Laboratory of Systems Pharmacology
- Ludwig-Maximilians-Universität München
- Munich Center for Machine Learning (MCML)
- Siemens
- Smart Data Analytics Research Group (University of Bonn & Fraunhofer IAIS)
- Technical University of Denmark - DTU Compute - Section for Cognitive Systems
- Technical University of Denmark - DTU Compute - Section for Statistics and Data Analysis
- University of Bonn
The development of PyKEEN has been funded by the following grants:
Funding Body | Program | Grant |
---|---|---|
DARPA | Young Faculty Award (PI: Benjamin Gyori) | W911NF2010255 |
DARPA | Automating Scientific Knowledge Extraction (ASKE) | HR00111990009 |
German Federal Ministry of Education and Research (BMBF) | Maschinelles Lernen mit Wissensgraphen (MLWin) | 01IS18050D |
German Federal Ministry of Education and Research (BMBF) | Munich Center for Machine Learning (MCML) | 01IS18036A |
Innovation Fund Denmark (Innovationsfonden) | Danish Center for Big Data Analytics driven Innovation (DABAI) | Grand Solutions |
The PyKEEN logo was designed by Carina Steinborn.
If you have found PyKEEN useful in your work, please consider citing our article:

```bibtex
@article{ali2021pykeen,
    author = {Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Sharifzadeh, Sahand and Tresp, Volker and Lehmann, Jens},
    journal = {Journal of Machine Learning Research},
    number = {82},
    pages = {1--6},
    title = {{PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings}},
    url = {http://jmlr.org/papers/v22/20-825.html},
    volume = {22},
    year = {2021}
}
```