
[Roadmap] 0.7 Release Plan #2888

Closed
jermainewang opened this issue May 5, 2021 · 7 comments

Comments

@jermainewang
Member

As usual, we want to first thank all the contributors. In the 0.6 release, we received 69 PRs from 33 new contributors, and 11 new GNN examples were added to the repository, bringing the total to 70. Let's also congratulate @nv-dlasalle, who has been actively improving many of DGL's core GPU utilities, on becoming the first community committer. If you also wish to become a DGL committer, don't hesitate to contribute to DGL today.

We have planned the following new features for 0.7:

  • [Doc] A tutorial for training a node classification model on multiple GPUs on a single machine
  • [Doc] A tutorial for training a graph classification model on multiple GPUs on a single machine
  • [Doc] A tutorial for training a node classification model on multiple machines
  • [Doc] Expand the blitz introduction series with tutorials for heterogeneous graphs
  • [Core] Differentiable sparse-sparse adjacency matrix multiplication wrapped in graph semantics
  • [Core] Differentiable sparse-sparse adjacency matrix addition wrapped in graph semantics
  • [Core] PyTorch Lightning support
  • [Core] Graclus pooling
  • [Core] Biased neighbor sampling by node type
  • [Core] Sweep all subgraph APIs and correct any inconsistent behaviors.
  • [Core] Speed up DGLGraph construction for graph classification tasks
  • [Core] Support creating DGLGraph directly from CSR/CSC
  • [Core] A new API for sorting a graph by src/dst
  • [Core] Enable NCCL for distributed sparse embedding across GPUs
  • [Heterograph] Extend update_all to heterographs when both message reductions are summation.
  • [Heterograph] Unify the current two RGCN implementations.
  • [Heterograph] HGT NN module
  • [Distributed] Distributed embedding with synchronized gradient updates
  • [Distributed] Allow killing all training jobs by keyboard signals (e.g., ctrl+c)
  • [Distributed] Support computing out-degrees
  • [Distributed][Doc] Doc for how to preprocess graph for link prediction
  • [Model] TGN
  • [Model] TGAT
  • [Model] GraphSIM
  • [Model] InfoGraph
  • [Model] MVGRL
  • [Model] DimeNet / DimeNet++
  • [Model] GRACE
  • [Model] DeeperGCN
  • [Model] GraphSAINT
  • [Model] node2vec
  • [Model] C&S
  • (Experimental) Utilities for visualizing GNNs by GNNVis
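To give a sense of what the "[Core] Support creating DGLGraph directly from CSR/CSC" item is about: CSR stores a graph as an index-pointer array plus a flat neighbor array, which avoids the per-edge conversion cost of building from an edge list. The sketch below is a plain-Python illustration of the COO-to-CSR conversion, not DGL's actual implementation.

```python
def coo_to_csr(num_nodes, src, dst):
    """Convert parallel src/dst edge arrays to a CSR (indptr, indices) pair."""
    # Count the out-degree of every source node.
    degree = [0] * num_nodes
    for s in src:
        degree[s] += 1
    # indptr[i] marks where node i's neighbors start in `indices`.
    indptr = [0] * (num_nodes + 1)
    for i in range(num_nodes):
        indptr[i + 1] = indptr[i] + degree[i]
    # Scatter each destination into its node's slot.
    indices = [0] * len(src)
    offset = indptr[:]  # running write position per source node
    for s, d in zip(src, dst):
        indices[offset[s]] = d
        offset[s] += 1
    return indptr, indices

indptr, indices = coo_to_csr(4, [0, 0, 1, 3], [1, 2, 3, 0])
print(indptr)   # [0, 2, 3, 3, 4]
print(indices)  # [1, 2, 3, 0]
```

Creating the graph directly from such arrays skips this conversion entirely, which is where the planned speedup comes from.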

We warmly welcome any help from the community. Feel free to leave any comments.

@jermainewang jermainewang pinned this issue May 5, 2021
@guhaishuo

Can we add some tutorials that use spatiotemporal graph neural networks (such as ST-GCN) to make predictions on spatiotemporal data (such as traffic flow)?

@felipemello1

Can mixed precision be accessed without having to compile from source? I believe PyTorch Lightning has a parameter fp16=True. Maybe make it available through the [Core] PyTorch Lightning support item? Thanks for all the hard work!

@licj15

licj15 commented Jun 24, 2021

Thanks for the great work! Is there any plan about the AMD ROCm support?

@jermainewang
Member Author

Hi @licj15, we don't know ROCm very well, so there is currently no plan to support it. We welcome any suggestions and discussion, and would be glad to see an RFC on the topic.

@Mossi8

Mossi8 commented Jul 27, 2021

Hi,

Is it already compatible with PyTorch Lightning? I was trying the unsupervised GraphSAGE example for PyTorch Lightning, and EdgeDataLoader and NodeDataLoader are not working for me. These classes do not inherit from torch.utils.data.DataLoader. Am I missing something, or should it work?

@Mossi8

Mossi8 commented Jul 27, 2021

It seems that if I call:

trainer.fit(sage, train_dataloader=train_dataloader())

I get the following error:
TypeError: cannot unpack non-iterable _EdgeDataLoaderIter object

But it does enter the training loop if I do:
trainer.fit(sage, train_dataloader=train_dataloader().dataloader)

However, I then cannot access the .ndata attributes.

I can provide a more detailed explanation, but it is based on the unsupervised GraphSAGE example for PyTorch Lightning.
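The error above is consistent with the training framework being handed an iterator object rather than an iterable: Lightning iterates whatever it receives, so a wrapper loader needs to implement __iter__ itself, which also explains why passing the inner .dataloader happens to work. The sketch below is plain Python with hypothetical class names (InnerLoader, WrappedLoader), not DGL's or Lightning's code, just to illustrate the pattern.

```python
class InnerLoader:
    """Stands in for the plain torch-style dataloader wrapped by DGL."""
    def __init__(self, batches):
        self.batches = batches

    def __iter__(self):
        return iter(self.batches)


class WrappedLoader:
    """Stands in for a DGL-style loader that wraps an inner dataloader."""
    def __init__(self, batches):
        self.dataloader = InnerLoader(batches)

    def __iter__(self):
        # Forwarding iteration to the inner loader is what lets a
        # framework consume the wrapper directly with a for-loop.
        return iter(self.dataloader)


loader = WrappedLoader([("batch", 0), ("batch", 1)])
print(list(loader))  # [('batch', 0), ('batch', 1)]
```

If the wrapper lacked __iter__, callers would be forced to reach into the inner .dataloader, losing whatever extra behavior the wrapper provides, which matches the symptom reported here.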

@jermainewang
Member Author

Hi @Mossi8 , could you please open another issue? I'm closing this since 0.7 has been released.

@jermainewang jermainewang unpinned this issue Aug 2, 2021