
Add output size to attentive hparams #1133

Open · wants to merge 1 commit into base: main

Conversation

KnathanM (Member)

Closes #1120

The AttentiveAggregation class uses a single learned linear layer to compute each node's aggregation weight from that node's learned representation. I don't see this option in v1, so I think it is new to v2, though I don't know its full background.
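To make the mechanism concrete, here is a minimal framework-free sketch of what such an attentive aggregation computes (the actual chemprop class uses a `torch.nn.Linear` scoring layer; the function name and the numpy formulation here are illustrative, not the real implementation):

```python
import numpy as np

def attentive_aggregate(H, W, b):
    """Aggregate node representations H (n_nodes, d) into a single
    graph vector. A learned linear layer (weights W of shape (d, 1),
    bias b) scores each node, a softmax over nodes turns the scores
    into attention weights, and the output is the weighted sum of
    node representations. Illustrative sketch only."""
    scores = H @ W + b                      # (n_nodes, 1) raw scores
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha = alpha / alpha.sum()             # attention weights sum to 1
    return (alpha * H).sum(axis=0)          # weighted sum -> (d,)
```

Note that `W` has shape `(d, 1)`, so the layer cannot be constructed without knowing `d`, the length of the node representation, which is the root of the checkpoint-loading problem this PR addresses.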

In any event, the class needs to know the length of the node representation in order to instantiate the linear layer. We need to save this length as a hyperparameter under aggregation so that the model can be reloaded from a checkpoint.

Development

Successfully merging this pull request may close these issues.

[v2 BUG]: Loading a model with AttentiveAggregation fails from a checkpoint