Docs: c++11 -> c++14 (#30530)
Summary:
Pull Request resolved: #30530

Switch some mentions of "C++11" in the docs to "C++14"
ghstack-source-id: 95812049

Test Plan: testinprod

Differential Revision: D18733733

fbshipit-source-id: b9d0490eb3f72bad974d134bbe9eb563f6bc8775
smessmer authored and facebook-github-bot committed Dec 17, 2019
1 parent cc8d634 commit 5554e5b
Showing 6 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -573,7 +573,7 @@ If you are working on the CUDA code, here are some useful CUDA debugging tips:
slow down the build process for about 50% (compared to only `DEBUG=1`), so use wisely.
2. `cuda-gdb` and `cuda-memcheck` are your best CUDA debugging friends. Unlike`gdb`,
`cuda-gdb` can display actual values in a CUDA tensor (rather than all zeros).
- 3. CUDA supports a lot of C++11 features such as, `std::numeric_limits`, `std::nextafter`,
+ 3. CUDA supports a lot of C++11/14 features such as, `std::numeric_limits`, `std::nextafter`,
`std::tuple` etc. in device code. Many of such features are possible because of the
[--expt-relaxed-constexpr](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#constexpr-functions)
nvcc flag. There is a known [issue](https://github.com/ROCm-Developer-Tools/HIP/issues/374)
2 changes: 1 addition & 1 deletion aten/conda/meta.yaml
@@ -26,7 +26,7 @@ requirements:
about:
home: https://github.com/zdevito/ATen
license: BSD
- summary: A TENsor library for C++11
+ summary: A TENsor library for C++14

extra:
recipe-maintainers:
2 changes: 1 addition & 1 deletion caffe2/contrib/aten/README.md
@@ -1,7 +1,7 @@
# An ATen operator for Caffe2

[ATen](https://github.com/zdevito/aten) is a simple tensor library thats exposes the Tensor operations in Torch
- and PyTorch directly in C++11. This library provides a generated wrapper around the ATen API
+ and PyTorch directly in C++14. This library provides a generated wrapper around the ATen API
that makes these functions available in Caffe2 as an operator. It also makes it accessible using the
ToffeeIR.

2 changes: 1 addition & 1 deletion cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake
@@ -1472,7 +1472,7 @@ macro(CUDA_WRAP_SRCS cuda_target format generated_files)
string(APPEND _cuda_nvcc_flags_config "\nset(CUDA_NVCC_FLAGS_${config_upper} ${CUDA_NVCC_FLAGS_${config_upper}} ;; ${CUDA_WRAP_OPTION_NVCC_FLAGS_${config_upper}})")
endforeach()

- # Process the C++11 flag. If the host sets the flag, we need to add it to nvcc and
+ # Process the C++14 flag. If the host sets the flag, we need to add it to nvcc and
# remove it from the host. This is because -Xcompile -std=c++ will choke nvcc (it uses
# the C preprocessor). In order to get this to work correctly, we need to use nvcc's
# specific c++14 flag.
2 changes: 1 addition & 1 deletion docs/cpp/source/frontend.rst
@@ -1,7 +1,7 @@
The C++ Frontend
================

- The PyTorch C++ frontend is a C++11 library for CPU and GPU
+ The PyTorch C++ frontend is a C++14 library for CPU and GPU
tensor computation, with automatic differentiation and high level building
blocks for state of the art machine learning applications.

2 changes: 1 addition & 1 deletion docs/cpp/source/notes/tensor_basics.rst
@@ -2,7 +2,7 @@ Tensor Basics
=============

The ATen tensor library backing PyTorch is a simple tensor library thats exposes
- the Tensor operations in Torch directly in C++11. ATen's API is auto-generated
+ the Tensor operations in Torch directly in C++14. ATen's API is auto-generated
from the same declarations PyTorch uses so the two APIs will track each other
over time.

