# Getting involved, possible areas of work for new contributors #990
I will add some possible work items.
| List of Learning Resources | |
|---|---|
| ONNX standard | https://github.com/onnx/onnx/blob/master/docs/Operators.md |

Simple warmup task: add verifiers to ops that are still missing them. Here are some ops that could benefit from this: PRelu, Pow, BatchNormalizationInferenceMode, Reshape, Unsqueeze, Squeeze, Constant, Concat, Flatten, Resize, ConstantOfShape... I would stay away from more complicated ops such as convolutions for a first pass (some ops already have verifiers, others do not). Currently, info about verifiers and how to add them is here: https://github.com/onnx/onnx-mlir/blob/main/docs/HowToAddAnOperation.md. A sketch of what a verifier can look like follows below.
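For orientation, here is a minimal sketch of a verifier, assuming MLIR's standard verifier hook (`hasVerifier` in the op's TableGen definition). `ONNXConcatOp` and the specific check are illustrative only, not the project's actual implementation; see HowToAddAnOperation.md for the real recipe.

```cpp
// Minimal verifier sketch; op name and check are illustrative.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

LogicalResult ONNXConcatOp::verify() {
  // Concat requires all ranked inputs to agree on rank; unranked
  // inputs are checked later, once shape inference has run.
  int64_t rank = -1;
  for (Value operand : getOperands()) {
    auto type = operand.getType().dyn_cast<RankedTensorType>();
    if (!type)
      continue;
    if (rank == -1)
      rank = type.getRank();
    else if (type.getRank() != rank)
      return emitOpError("inputs must all have the same rank");
  }
  return success();
}
```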
## Possible areas of contribution
Below is a list of possible contributions that new developers can take on. We are always ready to help you with a new task; a good way to get help is to open an issue stating the problem you would like to address and asking for assistance.
1. Implementing new operations
There are plenty of operations to implement, both for the ONNX standard and for the preprocessing ONNX.ML set of operations. There is an issue, #922, where you can record which operation you want to work on, to avoid duplicated effort. We have a documentation page on the various steps involved in adding a new op (see the docs folder or CONTRIBUTING.md for links to the relevant pages).
One possible "warmup" task is to add verifiers to existing ops that still lack them. Another is to modernize existing operations to use the newer code generation scheme built on ONNXShapeHelper and the code builders (a sketch of that style follows below). Look for recently added files, as these are more likely to use our preferred code generation scheme.
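To make the builder style concrete, here is a hedged sketch of one scalar iteration of an element-wise Add. `MultiDialectBuilder` and the `krnl`/`math` members follow the pattern used in the onnx-mlir sources (see src/Dialect/Mlir/DialectBuilder.hpp), but treat the exact signatures here as assumptions and check the current code.

```cpp
// Sketch of builder-based codegen for one element of an Add;
// signatures are assumptions based on the DialectBuilder pattern.
void emitElementwiseAdd(ConversionPatternRewriter &rewriter, Location loc,
                        Value lhs, Value rhs, Value result, ValueRange ivs) {
  MultiDialectBuilder<KrnlBuilder, MathBuilder> create(rewriter, loc);
  Value a = create.krnl.load(lhs, ivs);  // load one element of each input
  Value b = create.krnl.load(rhs, ivs);
  Value sum = create.math.add(a, b);     // dialect-agnostic arithmetic
  create.krnl.store(sum, result, ivs);   // store the output element
}
```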
2. Improving the performance of current operations
Many operations may not yet be lowered using optimizing code generation schemes. Many operations can benefit from memory tiling, vectorization, and similar techniques; a generic tiling example follows this paragraph. If you are interested in contributing such performance improvements, look at the currently optimized operations (e.g., MatMul, Gemm) in the src/Conversion/ONNXToKrnl
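As a generic illustration of memory tiling (plain C++, not onnx-mlir code), here is the loop restructuring a lowering pass applies to keep a matmul's working set in cache. N and TILE are arbitrary example values, with TILE dividing N.

```cpp
// Tiled matmul: compute on TILE-sized blocks so the A, B, and C
// tiles stay hot in cache across the inner loops.
constexpr int N = 512, TILE = 64;

void tiledMatmul(const float A[N][N], const float B[N][N], float C[N][N]) {
  for (int ii = 0; ii < N; ii += TILE)
    for (int kk = 0; kk < N; kk += TILE)
      for (int jj = 0; jj < N; jj += TILE)
        for (int i = ii; i < ii + TILE; ++i)
          for (int k = kk; k < kk + TILE; ++k)
            for (int j = jj; j < jj + TILE; ++j)
              C[i][j] += A[i][k] * B[k][j];
}
```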
subdirectories.
3. Implementing a performance monitoring benchmark
If you have expertise in performance monitoring and/or tools to support it, we could use help building such a benchmark to track the performance of simple operations and/or complete benchmarks; a minimal harness sketch follows below.
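Here is a minimal sketch of the kind of micro-benchmark harness this item asks for: run a kernel repeatedly and report the best-of-N wall-clock time. The vector-add kernel is a placeholder for whatever operation is being tracked.

```cpp
// Best-of-N timing harness; the kernel is a placeholder.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

template <typename Fn>
double bestOfN(Fn &&kernel, int n = 10) {
  double best = 1e30;
  for (int i = 0; i < n; ++i) {
    auto start = std::chrono::steady_clock::now();
    kernel();
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    best = std::min(best, elapsed.count());
  }
  return best;
}

int main() {
  std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
  double seconds = bestOfN([&] {
    for (size_t i = 0; i < x.size(); ++i)
      x[i] += y[i]; // placeholder "simple operation"
  });
  std::printf("best time: %.6f s\n", seconds);
  return 0;
}
```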
4. Improving the documentation (internal, external, or tutorials)
If you like documenting, there are plenty of ways to help improve the current pages. Some may just be too long and would benefit from being split; some have redundant information and would benefit from reorganization. We also need to better separate the information that is useful to users from that aimed at developers. Tutorials would also be greatly appreciated: if you enjoy learning how to use the system, you can let others benefit from your insights by writing one.
5. Integration with ONNX
Several folks have been interested in using ONNX-MLIR as an ONNX-to-ONNX tool. We have strong support for ingesting ONNX models into our project, but at this time we have no way to regenerate an ONNX protobuf file from our ONNX dialect. If you are interested in such a project, I know it would be greatly appreciated by several folks and projects in the community; a hedged sketch of one possible direction follows below.
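One possible direction, sketched below under stated assumptions (this is not an existing onnx-mlir API): walk the module and emit one `onnx::NodeProto` per ONNX dialect op, using the protobuf classes generated from onnx.proto. A real exporter would also have to map SSA values to tensor names and serialize attributes and types.

```cpp
// Hedged exporter sketch: map ONNX dialect ops back to NodeProto
// entries. Value naming and attribute serialization are omitted.
#include "mlir/IR/BuiltinOps.h"
#include "llvm/ADT/StringRef.h"
#include "onnx/onnx_pb.h"

onnx::ModelProto exportModel(mlir::ModuleOp module) {
  onnx::ModelProto model;
  onnx::GraphProto *graph = model.mutable_graph();
  module.walk([&](mlir::Operation *op) {
    // Map "onnx.Relu" back to the ONNX operator name "Relu".
    llvm::StringRef name = op->getName().getStringRef();
    if (!name.consume_front("onnx."))
      return; // skip non-ONNX ops (func, return, ...)
    onnx::NodeProto *node = graph->add_node();
    node->set_op_type(name.str());
  });
  return model;
}
```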
6. Integration within MLIR
MLIR also supports TensorFlow and, increasingly, Torch dialects. These could potentially be integrated via a bridge to ONNX, if you are interested in pursuing this kind of work.