Tags: pytorch/executorch
Update on "Use c10 version of half/bfloat16 in executorch" Accomplished by importing relevant files from c10 into executorch/runtime/core/portable_type/c10, and then using `using` in the top-level ExecuTorch headers. This approach should keep the ExecuTorch build hermetic for embedded use cases. In the future, we should add a CI job to ensure the c10 files stay identical to the PyTorch ones. Differential Revision: [D66106969](https://our.internmc.facebook.com/intern/diff/D66106969/) [ghstack-poisoned]
[0.5 release] Update pyproject with torch dependencies (#8038)
Arm backend: enable dim_order (#7952, cherry-pick of #7831) Add support for to_dim_order_copy. With edge_compile_config.skip_dim_order = True removed, to_copy nodes will be converted into to_dim_order_copy nodes, so this commit moves our to_copy handling into to_dim_order_copy. Signed-off-by: Oscar Andersson <oscar.andersson@arm.com> (cherry picked from commit 135e875) Co-authored-by: Oscar Andersson <87121123+oscarandersson8218@users.noreply.github.com>
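For context, the conversion this entry relies on happens at export time. Here is a minimal sketch of exporting with dim order enabled, assuming the public `to_edge`/`EdgeCompileConfig` API; the `CastToFloat` module and the explicit `_skip_dim_order=False` flag are illustrative choices, not the Arm backend's own code:

```python
# Sketch: with dim order enabled, aten._to_copy nodes in the edge dialect
# are lowered to dim_order_ops._to_dim_order_copy instead.
import torch
from executorch.exir import EdgeCompileConfig, to_edge

class CastToFloat(torch.nn.Module):  # illustrative module
    def forward(self, x):
        return x.to(torch.float32)

ep = torch.export.export(CastToFloat(), (torch.randn(2, 2, dtype=torch.float16),))
# _skip_dim_order=False enables dim order; the resulting graph should contain
# a _to_dim_order_copy node rather than a _to_copy node.
edge = to_edge(ep, compile_config=EdgeCompileConfig(_skip_dim_order=False))
print(edge.exported_program().graph)
```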
[ET-VK][ez] Update requirements for partitioning to_dim_order_copy (#7949) Pull Request resolved: #7859 ## Context The previous registration of the to_dim_order_copy op was incorrect. Currently there is no implementation of the op in the Vulkan backend, but since Vulkan manages memory layout internally, the op node can be removed as long as the only thing being changed is the dim order. In some instances the op is used to modify the dtype instead, in which case it cannot be removed and the Vulkan delegate cannot execute it correctly. Therefore, update the registration of the op to reflect this restriction. This diff should unblock enabling dim order ops for Vulkan. ghstack-source-id: 262710507 exported-using-ghexport Differential Revision: [D68528213](https://our.internmc.facebook.com/intern/diff/D68528213/) Co-authored-by: Stephen Jia <ssjia@meta.com> (cherry picked from commit 5ee5f2f)
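The restriction described here amounts to a dtype check on the node. A hypothetical sketch follows; the helper name and the `meta["val"]` access pattern are assumptions for illustration, not the Vulkan partitioner's actual code:

```python
import torch

# Hypothetical helper mirroring the restriction above: the Vulkan delegate may
# claim a _to_dim_order_copy node only when nothing but the dim order changes,
# since Vulkan manages memory layout internally and can simply drop the node.
# A dtype cast needs a real kernel, which the Vulkan backend does not have.
def vulkan_can_remove_to_dim_order_copy(node: torch.fx.Node) -> bool:
    in_val = node.args[0].meta["val"]   # FakeTensor describing the input
    out_val = node.meta["val"]          # FakeTensor describing the output
    return in_val.dtype == out_val.dtype
```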
Update the version for apple packages on the release branch (#7652)
Trigger wheel builds when modifying setup.py or pyproject.toml. These two files are the core of the wheel configs, so it makes sense to try building wheels when they change. Also sort the path entries now that the list is getting longer.
Bump runner memory for llama3_2 torchtune test_model