Rename inverse relations flag #1093

Draft · wants to merge 5 commits into base: master
10 changes: 5 additions & 5 deletions docs/source/byo/data.rst
@@ -47,14 +47,14 @@ The remainder of the examples will be for :func:`pykeen.pipeline.pipeline`, but
for :func:`pykeen.hpo.hpo_pipeline`.

If you want to add dataset-wide arguments, you can use the ``dataset_kwargs`` argument
-to the :class:`pykeen.pipeline.pipeline` to enable options like ``create_inverse_triples=True``.
+to the :class:`pykeen.pipeline.pipeline` to enable options like ``use_inverse_relations=True``.

>>> from pykeen.pipeline import pipeline
>>> from pykeen.datasets.nations import NATIONS_TRAIN_PATH, NATIONS_TEST_PATH
>>> result = pipeline(
... training=NATIONS_TRAIN_PATH,
... testing=NATIONS_TEST_PATH,
-... dataset_kwargs={'create_inverse_triples': True},
+... dataset_kwargs={'use_inverse_relations': True},
... model='TransE',
... epochs=5, # short epochs for testing - you should go higher
... )
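For context on what the renamed flag controls: enabling inverse relations complements every triple ``(h, r, t)`` with an inverse triple ``(t, r', h)``, where ``r'`` is a fresh relation ID. A minimal pure-Python sketch of the idea (an illustration only, not PyKEEN's internal implementation):

```python
def add_inverse_triples(triples, num_relations):
    """Extend ID-based (head, relation, tail) triples with their inverses.

    The inverse of relation r gets the ID r + num_relations, so the
    total number of relation IDs doubles.
    """
    inverses = [(t, r + num_relations, h) for (h, r, t) in triples]
    return triples + inverses

# Two triples over 2 relations -> 4 triples over 4 relation IDs.
triples = [(0, 0, 1), (1, 1, 2)]
extended = add_inverse_triples(triples, num_relations=2)
```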
@@ -89,21 +89,21 @@ TSV files, you can use the :class:`pykeen.triples.TriplesFactory` interface.
the wrong identifiers in the training set during evaluation, and we'd get nonsense results.

The ``dataset_kwargs`` argument is ignored when passing your own :class:`pykeen.triples.TriplesFactory`, so be
-sure to include the ``create_inverse_triples=True`` in the instantiation of those classes if that's your
+sure to include the ``use_inverse_relations=True`` in the instantiation of those classes if that's your
desired behavior as in:

>>> from pykeen.triples import TriplesFactory
>>> from pykeen.pipeline import pipeline
>>> from pykeen.datasets.nations import NATIONS_TRAIN_PATH, NATIONS_TEST_PATH
>>> training = TriplesFactory.from_path(
... NATIONS_TRAIN_PATH,
-... create_inverse_triples=True,
+... use_inverse_relations=True,
... )
>>> testing = TriplesFactory.from_path(
... NATIONS_TEST_PATH,
... entity_to_id=training.entity_to_id,
... relation_to_id=training.relation_to_id,
-... create_inverse_triples=True,
+... use_inverse_relations=True,
... )
>>> result = pipeline(
... training=training,
8 changes: 4 additions & 4 deletions docs/source/tutorial/inductive_lp.rst
@@ -91,7 +91,7 @@ Let's create a basic `InductiveNodePiece` using one of the `InductiveFB15k237` d
from pykeen.models.inductive import InductiveNodePiece
from pykeen.losses import NSSALoss

-dataset = InductiveFB15k237(version="v1", create_inverse_triples=True)
+dataset = InductiveFB15k237(version="v1", use_inverse_relations=True)

model = InductiveNodePiece(
triples_factory=dataset.transductive_training, # training factory, used to tokenize training nodes
@@ -110,7 +110,7 @@ Creating a message-passing version of NodePiece is pretty much the same:
from pykeen.models.inductive import InductiveNodePieceGNN
from pykeen.losses import NSSALoss

-dataset = InductiveFB15k237(version="v1", create_inverse_triples=True)
+dataset = InductiveFB15k237(version="v1", use_inverse_relations=True)

model = InductiveNodePieceGNN(
triples_factory=dataset.transductive_training, # training factory, will be also used for a GNN
@@ -166,7 +166,7 @@ Let's create a training loop and validation / test evaluators:
from pykeen.evaluation.rank_based_evaluator import SampledRankBasedEvaluator
from pykeen.losses import NSSALoss

-dataset = InductiveFB15k237(version="v1", create_inverse_triples=True)
+dataset = InductiveFB15k237(version="v1", use_inverse_relations=True)

model = ... # model init here, one of InductiveNodePiece
optimizer = ... # some optimizer
@@ -207,7 +207,7 @@ in the sLCWA mode with 32 negative samples per positive, with NSSALoss, and Samp

from torch.optim import Adam

-dataset = InductiveFB15k237(version="v1", create_inverse_triples=True)
+dataset = InductiveFB15k237(version="v1", use_inverse_relations=True)

model = InductiveNodePieceGNN(
triples_factory=dataset.transductive_training, # training factory, will be also used for a GNN
8 changes: 4 additions & 4 deletions docs/source/tutorial/node_piece.rst
@@ -18,7 +18,7 @@ throughout the following examples.
from pykeen.datasets import FB15k237

# inverses are necessary for the current version of NodePiece
-dataset = FB15k237(create_inverse_triples=True)
+dataset = FB15k237(use_inverse_relations=True)

In the simplest usage of :class:`pykeen.models.NodePiece`, we'll only
use relations for tokenization. We can do this by with the following
@@ -286,7 +286,7 @@ Let's pack the last NodePiece model into the pipeline:
result = pipeline(
dataset="fb15k237",
dataset_kwargs=dict(
-create_inverse_triples=True,
+use_inverse_relations=True,
),
model=NodePiece,
model_kwargs=dict(
@@ -498,7 +498,7 @@ pipeline:
result = pipeline(
dataset="fb15k237",
dataset_kwargs=dict(
-create_inverse_triples=True,
+use_inverse_relations=True,
),
model=NodePiece,
model_kwargs=dict(
@@ -590,7 +590,7 @@ Let's use the new tokenizer for the Wikidata5M graph of 5M nodes and 20M edges.

from pykeen.datasets import Wikidata5M

-dataset = Wikidata5M(create_inverse_triples=True)
+dataset = Wikidata5M(use_inverse_relations=True)

model = NodePiece(
triples_factory=dataset.training,
16 changes: 8 additions & 8 deletions docs/source/tutorial/running_ablation.rst
@@ -68,7 +68,7 @@ as ``title`` are special and used by PyKEEN and :mod:`optuna`.
... )

As mentioned above, we also want to measure the effect of explicitly modeling inverse relations on the model's
-performance. Therefore, we extend the ablation study by including the ``create_inverse_triples`` argument:
+performance. Therefore, we extend the ablation study by including the ``use_inverse_relations`` argument:

.. code-block:: python

@@ -82,7 +82,7 @@ performance. Therefore, we extend the ablation study by including the ``create_i
... training_loops=["LCWA"],
... optimizers=["Adam"],
... # Add inverse triples with
-... create_inverse_triples=[True, False],
+... use_inverse_relations=[True, False],
... # Fast testing configuration, make bigger in prod
... epochs=1,
... n_trials=1,
@@ -91,10 +91,10 @@ performance. Therefore, we extend the ablation study by including the ``create_i
.. note::

Unlike ``models``, ``datasets``, ``losses``, ``training_loops``, and ``optimizers``,
-``create_inverse_triples`` has a default value, which is ``False``.
+``use_inverse_relations`` has a default value, which is ``False``.

If there is only one value for either the ``models``, ``datasets``, ``losses``, ``training_loops``, ``optimizers``,
-or ``create_inverse_triples`` argument, it can be given as a single value instead of the list.
+or ``use_inverse_relations`` argument, it can be given as a single value instead of the list.

.. code-block:: python

@@ -107,7 +107,7 @@ or ``create_inverse_triples`` argument, it can be given as a single value instea
... losses=["BCEAfterSigmoidLoss", "MarginRankingLoss"],
... training_loops="LCWA",
... optimizers="Adam",
-... create_inverse_triples=[True, False],
+... use_inverse_relations=[True, False],
... # Fast testing configuration, make bigger in prod
... epochs=1,
... n_trials=1,
@@ -200,7 +200,7 @@ the best model of each ablation-experiment using the argument ``best_replicates`
... losses=["BCEAfterSigmoidLoss", "MarginRankingLoss"],
... training_loops=["LCWA"],
... optimizers=["Adam"],
-... create_inverse_triples=[True, False],
+... use_inverse_relations=[True, False],
... stopper="early",
... stopper_kwargs={
... "frequency": 5,
@@ -384,7 +384,7 @@ Now that we defined our own hyper-parameter values/ranges, let's have a look at
>>> losses = ["BCEAfterSigmoidLoss"]
>>> training_loops = ["lcwa"]
>>> optimizers = ["adam"]
->>> create_inverse_triples= [True, False]
+>>> use_inverse_relations = [True, False]
>>> stopper = "early"
>>> stopper_kwargs = {
... "frequency": 5,
@@ -513,7 +513,7 @@ defined within our program would look as follows:
"losses": ["BCEAfterSigmoidLoss", "CrossEntropyLoss"]
"training_loops": ["lcwa"],
"optimizers": ["adam"],
-"create_inverse_triples": [true,false],
+"use_inverse_relations": [true, false],
"stopper": "early"
"stopper_kwargs": {
"frequency": 5,
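A keyword rename like this one is often accompanied by a backwards-compatibility shim so that existing user code keeps working for a deprecation period. A hypothetical sketch of such a shim (not code from this PR or from PyKEEN; the function name is made up for illustration):

```python
import warnings


def resolve_inverse_flag(use_inverse_relations=None, create_inverse_triples=None):
    """Resolve the renamed flag, accepting the old keyword with a warning.

    The new keyword takes precedence; the old one is honored only as a
    fallback, and using it emits a DeprecationWarning.
    """
    if create_inverse_triples is not None:
        warnings.warn(
            "'create_inverse_triples' is deprecated; "
            "use 'use_inverse_relations' instead",
            DeprecationWarning,
            stacklevel=2,
        )
        if use_inverse_relations is None:
            use_inverse_relations = create_inverse_triples
    return bool(use_inverse_relations)
```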