
[ONNX] Support optional type #68793

Merged (70 commits) on Feb 16, 2022
Conversation

@garymm (Collaborator) commented on Nov 23, 2021

Based on #61938

@pytorch-probot (bot) commented on Nov 23, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/garymm/pytorch/blob/21177d49c4533bf971ffd790b100e8a24b9507aa/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflow | Labels (bold = enabled) | Status
Triggered Workflows
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@garymm marked this pull request as draft on November 23, 2021 01:55
@facebook-github-bot (Contributor) commented on Nov 23, 2021

💊 CI failures summary and remediations

As of commit 5df8a58 (more details on the Dr. CI page):


  • 9/9 failures introduced in this PR

🕵️ 9 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-py3.7-gcc7 / test (default, 2, 2, linux.2xlarge) (1/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:12:39.2118782Z RuntimeError: test_linalg failed!
2022-02-16T02:12:38.9202478Z 
2022-02-16T02:12:38.9202583Z FAILED (errors=5, skipped=61, expected failures=15)
2022-02-16T02:12:38.9202722Z 
2022-02-16T02:12:38.9202801Z Generating XML reports...
2022-02-16T02:12:38.9846591Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgCPU-20220216021112.xml
2022-02-16T02:12:39.2114438Z Traceback (most recent call last):
2022-02-16T02:12:39.2114683Z   File "test/run_test.py", line 1101, in <module>
2022-02-16T02:12:39.2116510Z     main()
2022-02-16T02:12:39.2116701Z   File "test/run_test.py", line 1079, in main
2022-02-16T02:12:39.2118567Z     raise RuntimeError(err_message)
2022-02-16T02:12:39.2118782Z RuntimeError: test_linalg failed!
2022-02-16T02:12:39.4387857Z + cleanup
2022-02-16T02:12:39.4388162Z + retcode=1
2022-02-16T02:12:39.4388366Z + set +x
2022-02-16T02:12:39.4429046Z ##[error]Process completed with exit code 1.
2022-02-16T02:12:39.4459815Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-02-16T02:12:39.4460151Z # Ensure the working directory gets chowned back to the current user
2022-02-16T02:12:39.4460464Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-02-16T02:12:39.4476360Z shell: /usr/bin/bash -e {0}
2022-02-16T02:12:39.4476531Z env:
2022-02-16T02:12:39.4476744Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc7

See GitHub Actions build linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (2/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:10:24.0928044Z RuntimeError: test_linalg failed!
2022-02-16T02:10:23.8188535Z 
2022-02-16T02:10:23.8188697Z FAILED (errors=5, skipped=61, expected failures=15)
2022-02-16T02:10:23.8188904Z 
2022-02-16T02:10:23.8189011Z Generating XML reports...
2022-02-16T02:10:23.8828797Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgCPU-20220216020902.xml
2022-02-16T02:10:24.0923223Z Traceback (most recent call last):
2022-02-16T02:10:24.0923493Z   File "test/run_test.py", line 1101, in <module>
2022-02-16T02:10:24.0925352Z     main()
2022-02-16T02:10:24.0925564Z   File "test/run_test.py", line 1079, in main
2022-02-16T02:10:24.0927745Z     raise RuntimeError(err_message)
2022-02-16T02:10:24.0928044Z RuntimeError: test_linalg failed!
2022-02-16T02:10:24.2863630Z 
2022-02-16T02:10:24.2863967Z real	4m29.718s
2022-02-16T02:10:24.2864360Z user	9m18.563s
2022-02-16T02:10:24.2864630Z sys	1m5.462s
2022-02-16T02:10:24.2865411Z + cleanup
2022-02-16T02:10:24.2865644Z + retcode=1
2022-02-16T02:10:24.2865850Z + set +x
2022-02-16T02:10:24.2905375Z ##[error]Process completed with exit code 1.
2022-02-16T02:10:24.2987573Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-02-16T02:10:24.2987909Z # Ensure the working directory gets chowned back to the current user

See GitHub Actions build linux-xenial-py3.7-clang7-asan / test (default, 2, 3, linux.2xlarge) (3/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:51:16.3151515Z RuntimeError: test_linalg failed!
2022-02-16T02:51:15.7071850Z 
2022-02-16T02:51:15.7071945Z FAILED (errors=5, skipped=65, expected failures=15)
2022-02-16T02:51:15.7072088Z 
2022-02-16T02:51:15.7072170Z Generating XML reports...
2022-02-16T02:51:15.7835325Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgCPU-20220216024642.xml
2022-02-16T02:51:16.3143845Z Traceback (most recent call last):
2022-02-16T02:51:16.3144275Z   File "test/run_test.py", line 1101, in <module>
2022-02-16T02:51:16.3147477Z     main()
2022-02-16T02:51:16.3147805Z   File "test/run_test.py", line 1079, in main
2022-02-16T02:51:16.3151137Z     raise RuntimeError(err_message)
2022-02-16T02:51:16.3151515Z RuntimeError: test_linalg failed!
2022-02-16T02:51:16.6649986Z + cleanup
2022-02-16T02:51:16.6650320Z + retcode=1
2022-02-16T02:51:16.6650572Z + set +x
2022-02-16T02:51:16.6693939Z ##[error]Process completed with exit code 1.
2022-02-16T02:51:16.6724406Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-02-16T02:51:16.6724749Z # Ensure the working directory gets chowned back to the current user
2022-02-16T02:51:16.6725057Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-02-16T02:51:16.6773932Z shell: /usr/bin/bash -e {0}
2022-02-16T02:51:16.6774115Z env:
2022-02-16T02:51:16.6774327Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-clang7-asan

See GitHub Actions build win-vs2019-cuda11.3-py3 / test (force_on_cpu, 1, 1, windows.4xlarge) (4/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T05:22:42.4052752Z FAIL [0.000s]: test_stft_cpu_float64 (__main__.TestFFTCPU)
2022-02-16T05:22:42.4049383Z Trying:
2022-02-16T05:22:42.4049782Z     two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
2022-02-16T05:22:42.4050204Z Expecting nothing
2022-02-16T05:22:42.4050470Z ok
2022-02-16T05:22:42.4050710Z Trying:
2022-02-16T05:22:42.4051150Z     torch.testing.assert_close(rfftn, two_ffts, check_stride=False)
2022-02-16T05:22:42.4051632Z Expecting nothing
2022-02-16T05:22:42.4051896Z ok
2022-02-16T05:22:42.4052089Z 
2022-02-16T05:22:42.4052362Z ======================================================================
2022-02-16T05:22:42.4052752Z FAIL [0.000s]: test_stft_cpu_float64 (__main__.TestFFTCPU)
2022-02-16T05:22:42.4053252Z ----------------------------------------------------------------------
2022-02-16T05:22:42.4053707Z Traceback (most recent call last):
2022-02-16T05:22:42.4055165Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
2022-02-16T05:22:42.4055966Z     result = test(self, **param_kwargs)
2022-02-16T05:22:42.4056815Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 788, in dep_fn
2022-02-16T05:22:42.4057462Z     return fn(slf, *args, **kwargs)
2022-02-16T05:22:42.4058664Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 943, in only_fn
2022-02-16T05:22:42.4059717Z     return fn(self, *args, **kwargs)
2022-02-16T05:22:42.4060554Z   File "test_spectral_ops.py", line 913, in test_stft
2022-02-16T05:22:42.4061210Z     _test((10,), 7, center=center)

See GitHub Actions build linux-xenial-py3.7-gcc5.4 / test (default, 1, 2, linux.2xlarge) (5/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:36:25.7499203Z RuntimeError: test_linalg failed!
2022-02-16T02:36:25.4267471Z 
2022-02-16T02:36:25.4267578Z FAILED (errors=5, skipped=61, expected failures=15)
2022-02-16T02:36:25.4267718Z 
2022-02-16T02:36:25.4267785Z Generating XML reports...
2022-02-16T02:36:25.4912077Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgCPU-20220216023453.xml
2022-02-16T02:36:25.7494134Z Traceback (most recent call last):
2022-02-16T02:36:25.7494397Z   File "test/run_test.py", line 1101, in <module>
2022-02-16T02:36:25.7496500Z     main()
2022-02-16T02:36:25.7496736Z   File "test/run_test.py", line 1079, in main
2022-02-16T02:36:25.7498941Z     raise RuntimeError(err_message)
2022-02-16T02:36:25.7499203Z RuntimeError: test_linalg failed!
2022-02-16T02:36:25.9921681Z + cleanup
2022-02-16T02:36:25.9921903Z + retcode=1
2022-02-16T02:36:25.9922052Z + set +x
2022-02-16T02:36:25.9962154Z ##[error]Process completed with exit code 1.
2022-02-16T02:36:25.9998059Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-02-16T02:36:25.9998383Z # Ensure the working directory gets chowned back to the current user
2022-02-16T02:36:25.9998693Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-02-16T02:36:26.0016865Z shell: /usr/bin/bash -e {0}
2022-02-16T02:36:26.0017047Z env:
2022-02-16T02:36:26.0017247Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc5.4

See GitHub Actions build linux-bionic-py3.7-clang9 / test (xla, 1, 1, linux.2xlarge) (6/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:10:47.1386737Z /var/lib/jenkins/w..., c10::optional, c10::optional)
2022-02-16T02:10:47.1382927Z  at::Tensor XLANativeFunctions::gelu_backward(const at::Tensor& grad,
2022-02-16T02:10:47.1383147Z             ^~~~~~~~~~~~~~~~~~
2022-02-16T02:10:47.1383380Z In file included from /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:13:0:
2022-02-16T02:10:47.1383817Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/XLANativeFunctions.h:183:19: error: candidate is: static at::Tensor torch_xla::XLANativeFunctions::gelu_backward(const at::Tensor&, const at::Tensor&)
2022-02-16T02:10:47.1384212Z  static at::Tensor gelu_backward(const at::Tensor & grad, const at::Tensor & self);
2022-02-16T02:10:47.1384443Z                    ^~~~~~~~~~~~~
2022-02-16T02:10:47.1385202Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:1793:12: error: prototype for ‘at::Tensor torch_xla::XLANativeFunctions::linspace(const c10::Scalar&, const c10::Scalar&, int64_t, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>)’ does not match any in class ‘torch_xla::XLANativeFunctions’
2022-02-16T02:10:47.1385713Z  at::Tensor XLANativeFunctions::linspace(const at::Scalar& start,
2022-02-16T02:10:47.1385930Z             ^~~~~~~~~~~~~~~~~~
2022-02-16T02:10:47.1386178Z In file included from /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:13:0:
2022-02-16T02:10:47.1386737Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/XLANativeFunctions.h:209:19: error: candidate is: static at::Tensor torch_xla::XLANativeFunctions::linspace(const c10::Scalar&, const c10::Scalar&, c10::optional<long int>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>)
2022-02-16T02:10:47.1387386Z  static at::Tensor linspace(const at::Scalar & start, const at::Scalar & end, c10::optional<int64_t> steps, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory);
2022-02-16T02:10:47.1387743Z                    ^~~~~~~~
2022-02-16T02:10:50.3758485Z [17/179] c++ -MMD -MF /var/lib/jenkins/workspace/xla/build/temp.linux-x86_64-3.7/torch_xla/csrc/ir.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/var/lib/jenkins/workspace/xla -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-bin -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/var/lib/jenkins/workspace -I/var/lib/jenkins/workspace/torch/csrc -I/var/lib/jenkins/workspace/torch/lib/tmp_install/include -I/opt/conda/lib/python3.7/site-packages/torch/include -I/opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.7/site-packages/torch/include/TH -I/opt/conda/lib/python3.7/site-packages/torch/include/THC -I/opt/conda/include/python3.7m -c -c /var/lib/jenkins/workspace/xla/torch_xla/csrc/ir.cpp -o /var/lib/jenkins/workspace/xla/build/temp.linux-x86_64-3.7/torch_xla/csrc/ir.o -std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1
2022-02-16T02:10:50.3762527Z cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-02-16T02:10:50.3763047Z In file included from /var/lib/jenkins/workspace/c10/util/Logging.h:28:0,
2022-02-16T02:10:50.3763490Z                  from /var/lib/jenkins/workspace/c10/core/TensorImpl.h:14,
2022-02-16T02:10:50.3764122Z                  from /opt/conda/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:21,
2022-02-16T02:10:50.3764811Z                  from /opt/conda/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
2022-02-16T02:10:50.3765296Z                  from /var/lib/jenkins/workspace/torch/csrc/lazy/core/hash.h:12,
2022-02-16T02:10:50.3765783Z                  from /var/lib/jenkins/workspace/xla/torch_xla/csrc/ir.h:19,

See GitHub Actions build win-vs2019-cpu-py3 / test (default, 1, 2, windows.4xlarge) (7/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T03:48:04.1188428Z FAIL [0.006s]: test_stft_cpu_float64 (__main__.TestFFTCPU)
2022-02-16T03:48:04.1184949Z Trying:
2022-02-16T03:48:04.1185348Z     two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
2022-02-16T03:48:04.1185761Z Expecting nothing
2022-02-16T03:48:04.1186057Z ok
2022-02-16T03:48:04.1186302Z Trying:
2022-02-16T03:48:04.1186746Z     torch.testing.assert_close(rfftn, two_ffts, check_stride=False)
2022-02-16T03:48:04.1187227Z Expecting nothing
2022-02-16T03:48:04.1187495Z ok
2022-02-16T03:48:04.1187680Z 
2022-02-16T03:48:04.1188035Z ======================================================================
2022-02-16T03:48:04.1188428Z FAIL [0.006s]: test_stft_cpu_float64 (__main__.TestFFTCPU)
2022-02-16T03:48:04.1188930Z ----------------------------------------------------------------------
2022-02-16T03:48:04.1189385Z Traceback (most recent call last):
2022-02-16T03:48:04.1190841Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 376, in instantiated_test
2022-02-16T03:48:04.1191575Z     result = test(self, **param_kwargs)
2022-02-16T03:48:04.1192405Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 788, in dep_fn
2022-02-16T03:48:04.1193060Z     return fn(slf, *args, **kwargs)
2022-02-16T03:48:04.1194214Z   File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_device_type.py", line 943, in only_fn
2022-02-16T03:48:04.1195168Z     return fn(self, *args, **kwargs)
2022-02-16T03:48:04.1195981Z   File "test_spectral_ops.py", line 913, in test_stft
2022-02-16T03:48:04.1196578Z     _test((10,), 7, center=center)

See GitHub Actions build linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (8/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:15:16.3248573Z The PR is introduc...m to confirm whether this change is wanted or not.
2022-02-16T02:15:16.3235710Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-02-16T02:15:16.3236649Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-02-16T02:15:16.3237667Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-02-16T02:15:16.3238725Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-02-16T02:15:16.3239978Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-02-16T02:15:16.3241116Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-02-16T02:15:16.3242504Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-02-16T02:15:16.3243846Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-02-16T02:15:16.3245640Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-02-16T02:15:16.3247680Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-02-16T02:15:16.3248573Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-02-16T02:15:16.3248837Z 
2022-02-16T02:15:16.3248907Z Broken ops: [
2022-02-16T02:15:16.3249187Z 	prim::RaiseException(str msg, str? cls=None) -> ()
2022-02-16T02:15:16.3249496Z 	aten::special_round(Tensor self, *, int decimals=0) -> (Tensor)
2022-02-16T02:15:16.3249859Z 	aten::special_round.out(Tensor self, *, int decimals=0, Tensor(a!) out) -> (Tensor(a!))
2022-02-16T02:15:16.3250244Z 	aten::linalg_diagonal(Tensor(a) A, *, int offset=0, int dim1=-2, int dim2=-1) -> (Tensor(a))
2022-02-16T02:15:16.3250646Z 	aten::_linalg_svd(Tensor A, bool full_matrices=False, bool compute_uv=True) -> (Tensor U, Tensor S, Tensor Vh)
2022-02-16T02:15:16.3251161Z 	aten::_linalg_svd.U(Tensor A, bool full_matrices=False, bool compute_uv=True, *, Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh) -> (Tensor(a!) U, Tensor(b!) S, Tensor(c!) Vh)
2022-02-16T02:15:16.3251638Z 	aten::scatter_reduce.two(Tensor self, int dim, Tensor index, str reduce, *, int? output_size=None) -> (Tensor)
2022-02-16T02:15:16.3252053Z 	aten::index_copy.out(Tensor self, int dim, Tensor index, Tensor source, *, Tensor(a!) out) -> (Tensor(a!))

See GitHub Actions build linux-bionic-py3.7-clang9 / test (noarch, 1, 1, linux.2xlarge) (9/9)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-16T02:13:07.6887042Z RuntimeError: test_linalg failed!
2022-02-16T02:13:07.3172299Z FAILED (errors=5, skipped=722, expected failures=15)
2022-02-16T02:13:07.3172440Z 
2022-02-16T02:13:07.3172519Z Generating XML reports...
2022-02-16T02:13:07.3816056Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgCPU-20220216021142.xml
2022-02-16T02:13:07.4559988Z Generated XML report: test-reports/python-unittest/test_linalg/TEST-TestLinalgMETA-20220216021142.xml
2022-02-16T02:13:07.6880329Z Traceback (most recent call last):
2022-02-16T02:13:07.6880987Z   File "test/run_test.py", line 1101, in <module>
2022-02-16T02:13:07.6883953Z     main()
2022-02-16T02:13:07.6884306Z   File "test/run_test.py", line 1079, in main
2022-02-16T02:13:07.6886676Z     raise RuntimeError(err_message)
2022-02-16T02:13:07.6887042Z RuntimeError: test_linalg failed!
2022-02-16T02:13:07.8784193Z 
2022-02-16T02:13:07.8784481Z real	7m8.618s
2022-02-16T02:13:07.8784723Z user	14m25.298s
2022-02-16T02:13:07.8784898Z sys	0m55.465s
2022-02-16T02:13:07.8785098Z + cleanup
2022-02-16T02:13:07.8785258Z + retcode=1
2022-02-16T02:13:07.8785396Z + set +x
2022-02-16T02:13:07.8826528Z ##[error]Process completed with exit code 1.
2022-02-16T02:13:07.8867741Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-02-16T02:13:07.8868099Z # Ensure the working directory gets chowned back to the current user

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot added the "oncall: jit" label (Add this issue/PR to JIT oncall triage queue) on Nov 23, 2021
@garymm marked this pull request as ready for review on December 1, 2021 21:54
@garymm (Collaborator, Author) commented on Dec 1, 2021

@BowenBao can you review since you already started reviewing #61938?
If you're busy let me know and I'll ask someone else.

@BowenBao (Collaborator) left a review comment

Thank you @garymm for updating this pull request! I'm leaving some comments and thoughts as I look through the code.

test/onnx/test_pytorch_onnx_onnxruntime.py
test/onnx/test_pytorch_onnx_onnxruntime.py (outdated)
torch/csrc/jit/passes/onnx/fixup_onnx_controlflow.cpp (outdated)
torch/csrc/jit/passes/onnx/fixup_onnx_controlflow.cpp (outdated)
@@ -508,7 +624,13 @@ void FixupONNXControlflowNodeOutputs(Node* n) {
  for (auto i : c10::irange(n->outputs().size())) {
    auto type = n->blocks().at(0)->outputs().at(i + 1)->type();
    if (i < loop_carried_output_size) {
      n->output(i)->setType(type);
      if (auto none_type = n->output(i)->type()->cast<NoneType>()) {
        n->output(i)->setType(
Review comment on the lines above:
Do we know why propagating shape from block output doesn't work for optional type values?
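For context, here is a minimal sketch (written for this summary, not code from the PR) of the kind of scripted model that produces a None-typed loop-carried value, which is what the FixupONNXControlflowNodeOutputs change above has to retype as Optional during export:

```python
from typing import Optional

import torch


class LoopWithOptionalCarry(torch.nn.Module):
    # Hypothetical module: the carried value `found` enters the loop as None
    # (NoneType in the JIT graph) and leaves as Optional[Tensor].
    def forward(self, x: torch.Tensor) -> Optional[torch.Tensor]:
        found: Optional[torch.Tensor] = None
        for i in range(int(x.size(0))):
            if bool(x[i].sum() > 0):
                found = x[i]
        return found


scripted = torch.jit.script(LoopWithOptionalCarry())
print(scripted.graph)  # the prim::Loop node carries an Optional[Tensor] output
```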

torch/csrc/jit/serialization/export.cpp (outdated)
torch/onnx/symbolic_opset9.py (outdated)
torch/onnx/utils.py (outdated)
@albanD removed their request for review on January 7, 2022 00:26
@soulitzer removed their request for review on January 18, 2022 18:53
@garymm marked this pull request as draft on January 20, 2022 01:39
@garymm marked this pull request as ready for review on January 21, 2022 22:24
@garymm (Collaborator, Author) left a review comment

@BowenBao I think this is ready for review. The check failures seem unrelated.

LMK if you'd like to do this in real time together.

torch/csrc/jit/passes/onnx/fixup_onnx_controlflow.cpp (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py (outdated)
torch/csrc/jit/passes/onnx/fixup_onnx_controlflow.cpp (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py
torch/csrc/jit/serialization/export.cpp (outdated)
torch/onnx/utils.py (outdated)
@BowenBao (Collaborator) commented:

Thanks @garymm, I will try to read through the code changes first.

@BowenBao (Collaborator) commented:

The files are conflicting because the branch is out of sync. @garymm, could you please do another rebase onto the latest onnx_ms_1 branch?

@garymm (Collaborator, Author) commented on Jan 27, 2022

@BowenBao rebased

@BowenBao (Collaborator) left a review comment

Thank you for taking over this PR. I'm still reading through the changes and leaving questions/comments along the way.

test/onnx/test_pytorch_onnx_no_runtime.py
test/onnx/test_pytorch_onnx_no_runtime.py (outdated)
test/onnx/test_pytorch_onnx_no_runtime.py (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py
test/onnx/test_pytorch_onnx_onnxruntime.py (outdated)
test/onnx/test_pytorch_onnx_onnxruntime.py
test/onnx/test_pytorch_onnx_onnxruntime.py
BowenBao added a commit that referenced this pull request on Mar 24, 2022:

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.

Co-authored-by: garymm <garymiguel@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>

Differential Revision: [D34625646](https://our.internmc.facebook.com/intern/diff/D34625646)

[ghstack-poisoned]
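To make the commit message concrete, below is a minimal sketch of the kind of export this enables: a scripted module that may return None, so its output maps to the ONNX Optional type. The module, input shape, and opset_version=16 are illustrative assumptions rather than code from this PR, and actually running such a model still depends on ONNX Runtime gaining opset 16 support as noted above:

```python
from typing import Optional
import io

import torch


class MaybeRelu(torch.nn.Module):
    # Hypothetical module: one branch returns a Tensor, the other returns None,
    # so the exported graph output is an optional tensor.
    def forward(self, x: torch.Tensor) -> Optional[torch.Tensor]:
        if bool(x.sum() > 0):
            return torch.relu(x)
        return None


model = torch.jit.script(MaybeRelu())
buffer = io.BytesIO()
torch.onnx.export(
    model,
    (torch.randn(2, 3),),
    buffer,
    opset_version=16,  # several optional-type ops only exist from opset 16 onward
)
```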
BowenBao added further commits that referenced this pull request on Mar 24, Mar 31, Apr 14, Apr 26, Apr 28, and May 2, 2022, each carrying the same message (ghstack updates; Pull Request resolved: #73284).
facebook-github-bot pushed a commit that referenced this pull request May 4, 2022
Summary:
Pull Request resolved: #73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
pytorchmergebot pushed a commit that referenced this pull request on May 4, 2022, with the same summary as above (cherry picked from commit 822e79f).
@garymm deleted the optional branch on May 4, 2022 20:29
atalman added a commit to atalman/pytorch that referenced this pull request May 12, 2022
Labels: cla signed, oncall: jit (Add this issue/PR to JIT oncall triage queue), open source
Projects: None yet
6 participants