
[autoparallel] add shard option #2696

Merged 2 commits into hpcaitech:main on Feb 15, 2023

Conversation

@YuliangLiu0306 (Contributor) commented on Feb 14, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234

📝 What does this PR do?

Adds a shard option. Users can use it to specify a solver preference and how many axes of the device mesh will be used for sharding. An illustrative sketch follows.

💥 Checklist before requesting a review

  • I have linked my PR to an issue
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/tables/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

@YuliangLiu0306 added the Run Build and Test and auto-parallel (related to the auto-parallel feature) labels on Feb 14, 2023
@YuliangLiu0306 linked an issue on Feb 14, 2023 that may be closed by this pull request
@YuliangLiu0306 force-pushed the feature/add_shard_option branch from 24f087b to 00fb3c8 on February 15, 2023 at 02:41
@github-actions commented

The code coverage for the changed files is 25%.

Name                                                                                 Stmts   Miss  Cover
--------------------------------------------------------------------------------------------------------
colossalai/auto_parallel/tensor_shard/initialize.py                                    120     93    22%
colossalai/auto_parallel/tensor_shard/node_handler/__init__.py                          26      0   100%
colossalai/auto_parallel/tensor_shard/node_handler/node_handler.py                     164     97    41%
colossalai/auto_parallel/tensor_shard/options.py                                        20      0   100%
colossalai/auto_parallel/tensor_shard/solver/__init__.py                                 5      0   100%
colossalai/auto_parallel/tensor_shard/solver/solver.py                                 269    246     9%
colossalai/auto_parallel/tensor_shard/solver/strategies_constructor.py                 101     84    17%
tests/test_auto_parallel/test_tensor_shard/test_gpt/test_solver_with_gpt_module.py      67     47    30%
tests/test_auto_parallel/test_tensor_shard/test_metainfo/utils.py                       90     75    17%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/test_shard_option.py       71     52    27%
tests/test_auto_parallel/test_tensor_shard/test_node_handler/utils.py                  124    106    15%
tests/test_auto_parallel/test_tensor_shard/test_param_resharding_cost.py                66     47    29%
tests/test_auto_parallel/test_tensor_shard/test_solver_with_resnet_v2.py                56     43    23%
--------------------------------------------------------------------------------------------------------
TOTAL                                                                                 1179    890    25%

@FrankLeeeee merged commit 21d6a48 into hpcaitech:main on Feb 15, 2023
Labels: auto-parallel (related to the auto-parallel feature)
Development

Successfully merging this pull request may close these issues:

[FEATURE]: Add shard option for autoparallel
2 participants