Fix the docs of paddle.assign and other APIs #4850

Merged
merged 15 commits into from Jun 6, 2022
24 changes: 6 additions & 18 deletions docs/api/paddle/assign_cn.rst
@@ -3,36 +3,24 @@
assign
-------------------------------

.. py:function:: paddle.assign(x,output=None)
.. py:function:: paddle.assign(x, output=None)




Copy the input Tensor or numpy array to the output Tensor.
Copy the input data to the output Tensor.

Parameters
::::::::::::

- **x** (Tensor|np.ndarray|list|tuple|scalar) - The input Tensor, numpy array, list/tuple of basic data, or scalar. Supported data types are float32, float64, int32, int64 and bool. Note: due to the protobuf data-transfer limit of the current framework, float64 data will be cast to float32.
- **output** (Tensor, optional) - The output Tensor. If None, a new Tensor is created as the output. Default: None.
- **x** (Tensor|np.ndarray|list|tuple|scalar) - The input Tensor, numpy array, list/tuple of basic data, or scalar. Supported data types are float32, float64, int32, int64 and bool. Note: due to the protobuf data-transfer limit of the current framework, float64 data will be cast to float32.
- **output** (Tensor, optional) - The output Tensor. If None, a new Tensor is created as the output. Default: None.

Returns
::::::::::::
The output Tensor, whose shape, data type and values are the same as ``x``.
A Tensor whose shape, data type and values are the same as ``x``.


Code example
::::::::::::

.. code-block:: python

import paddle
import numpy as np
data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float64')  # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
array = np.array([[1, 1],
                  [3, 4],
                  [1, 3]]).astype(np.int64)
result1 = paddle.zeros(shape=[3, 3], dtype='float32')
paddle.assign(array, result1)  # result1 = [[1, 1], [3, 4], [1, 3]]
result2 = paddle.assign(data)  # result2 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
result3 = paddle.assign(np.array([[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]], dtype='float32'))  # result3 = [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]
COPY-FROM: paddle.assign:assign-example
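
For reference, a minimal sketch of the documented behaviour (an assumed illustration, not the exact snippet pulled in by the COPY-FROM directive):

.. code-block:: python

    import numpy as np
    import paddle

    data = paddle.full(shape=[3, 2], fill_value=2.5, dtype='float32')

    # With output=None (the default) a new Tensor is created and returned.
    result = paddle.assign(data)             # [[2.5, 2.5], [2.5, 2.5], [2.5, 2.5]]

    # With an explicit output Tensor, the values are copied into it in place.
    out = paddle.zeros(shape=[3, 2], dtype='float32')
    paddle.assign(data, output=out)          # out now holds the same values as data

    # numpy arrays, lists/tuples and scalars are accepted as well.
    result2 = paddle.assign(np.array([[1, 2], [3, 4]], dtype='float32'))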
46 changes: 10 additions & 36 deletions docs/api/paddle/nn/AdaptiveAvgPool1D_cn.rst
@@ -4,59 +4,33 @@
AdaptiveAvgPool1D
-------------------------------

.. py:function:: paddle.nn.AdaptiveAvgPool1D(output_size, name=None)
.. py:class:: paddle.nn.AdaptiveAvgPool1D(output_size, name=None)

This operator computes 1D adaptive average pooling (池化) over an input Tensor according to the input `x`, `output_size` and other parameters. Both input and output are 3-D Tensors,
in NCL format by default, where `N` is the batch size, `C` is the number of channels and `L` is the length of the input feature.
Computes 1D adaptive average pooling (汇聚) over an input Tensor according to `output_size`. Both input and output are 3-D Tensors in NCL format, where N is the batch size, C is the number of channels and L is the feature length. The output shape is :math:`[N, C, output\_size]`.
Collaborator

"池化" is the more common rendering of "pooling"; where does "汇聚" come from?

Contributor Author

"汇聚" is the translation recommended by Dr. Li Hang and others.

Collaborator

After internal discussion, we suggest sticking with the common translation, for two reasons:

  • This set of terms has not yet been widely adopted; changing the docs directly would raise the cost of understanding, which runs against the docs' purpose of helping developers use the framework;
  • PaddlePaddle maintains a glossary of common deep-learning terms that specifies the common rendering "池化" for "pooling", and a large number of Paddle docs already use it, so a full rewrite would be a major undertaking.

If this terminology gains wide acceptance among Chinese developers or official promotion in the future, we will consider a full update, but for now the timing seems premature.

Contributor Author

I will change "汇聚" back to "池化" for now, but I do not accept this suggestion. If even a platform like PaddlePaddle will not promote a better translation, how can it ever become widely known among Chinese developers?

Collaborator

OK, we will do more research to evaluate whether to promote this set of translations. Thanks for pointing it out.


The formula is as follows:
The formula is

.. math::

lstart &= floor(i * L_{in} / L_{out})
lstart &= \lfloor i * L_{in} / L_{out}\rfloor,

lend &= ceil((i + 1) * L_{in} / L_{out})
lend &= \lceil(i + 1) * L_{in} / L_{out}\rceil,

Output(i) &= \frac{\sum Input[lstart:lend]}{lend - lstart}
Output(i) &= \frac{\sum Input[lstart:lend]}{lend - lstart}.


Parameters
:::::::::
- **output_size** (int): The length of the operator's output feature map; its data type is int.
- **name** (str, optional): The name of the operation (optional, default None). For more information, see :ref:`api_guide_Name`.

Shape
:::::::::
- **x** (Tensor): A 3-D Tensor in NCL format, with default shape (batch size, number of channels, output feature length); its data type is float32 or float64.
- **output** (Tensor): A 3-D Tensor in NCL format, with default shape (batch size, number of channels, output feature length); its data type is the same as that of the input x.
- **output_size** (int) - The length of the output feature; its data type is int.
- **name** (str, optional) - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.

Returns
:::::::::
A callable object that computes AdaptiveAvgPool1D
A callable object for computing 1D adaptive average pooling (汇聚).


Code example
:::::::::

.. code-block:: python

# average adaptive pool1d
# suppose the input data is in shape of [N, C, L] and `output_size` is m or [m];
# the output shape is [N, C, m]. Adaptive pooling divides the L dimension
# of the input data into m grids evenly and performs pooling in each
# grid to get the output.
# adaptive avg pool performs calculations as follows:
#
# for i in range(m):
#     lstart = floor(i * L / m)
#     lend = ceil((i + 1) * L / m)
#     output[:, :, i] = sum(input[:, :, lstart: lend]) / (lend - lstart)
#
import paddle
import paddle.nn as nn


data = paddle.to_tensor(paddle.uniform(shape=[1, 3, 32], min=-1.0, max=1.0, dtype="float32"))
AdaptiveAvgPool1D = nn.layer.AdaptiveAvgPool1D(output_size=16)
pool_out = AdaptiveAvgPool1D(data)
# pool_out shape: [1, 3, 16]
COPY-FROM: paddle.nn.AdaptiveAvgPool1D:AdaptiveAvgPool1D-example
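
To make the floor/ceil windowing in the formula concrete, a small sketch (assumed, not part of the patch) that checks one arbitrary output position i = 5 of the layer against a manual computation:

.. code-block:: python

    import math
    import paddle

    x = paddle.uniform(shape=[1, 3, 32], dtype='float32')   # NCL input
    pool = paddle.nn.AdaptiveAvgPool1D(output_size=16)
    out = pool(x)                                            # shape [1, 3, 16]

    # Recompute output position i with lstart = floor(i * L_in / L_out),
    # lend = ceil((i + 1) * L_in / L_out), then average that window.
    i, L_in, L_out = 5, 32, 16
    lstart = math.floor(i * L_in / L_out)
    lend = math.ceil((i + 1) * L_in / L_out)
    manual = x[:, :, lstart:lend].mean(axis=-1)
    print(paddle.allclose(out[:, :, i], manual))             # boolean Tensor holding True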
36 changes: 7 additions & 29 deletions docs/api/paddle/nn/functional/adaptive_avg_pool1d_cn.rst
@@ -6,45 +6,23 @@ adaptive_avg_pool1d

.. py:function:: paddle.nn.functional.adaptive_avg_pool1d(x, output_size, name=None)
Collaborator

It is suggested to update AdaptiveAvgPool2D & adaptive_avg_pool2d and AdaptiveAvgPool3D & adaptive_avg_pool3d in the same way; they have the same issues.

Contributor Author

I'd rather leave those to someone who accepts the "池化" translation.


This operator computes 1D adaptive average pooling (池化) over an input Tensor according to the input `x`, `output_size` and other parameters. Both input and output are 3-D Tensors,
in NCL format by default, where `N` is the batch size, `C` is the number of channels and `L` is the length of the input feature.
Computes 1D adaptive average pooling (汇聚) over the Tensor `x` according to `output_size`.
Collaborator

In rST rendering, the grey inline-code style on x requires two backquotes (``); a single one renders as italics.

Contributor Author

Done; I genuinely did not know that before.


.. note::
For details please refer to the corresponding `Class`: :ref:`cn_api_nn_AdaptiveAvgPool1D`
For details please refer to the corresponding `Class`: :ref:`cn_api_nn_AdaptiveAvgPool1D`.


Parameters
:::::::::
- **x** (Tensor): The input of this operator, a 3-D Tensor of shape `[N, C, L]`, where `N` is the batch size, `C` is the number of channels and `L` is the length of the input feature; its data type is float32 or float64.
- **output_size** (int): The length of the operator's output feature map; its data type is int.
- **name** (str, optional): The name of the operation (optional, default None). For more information, see :ref:`api_guide_Name`.
- **x** (Tensor) - The input of adaptive average pooling (汇聚), a 3-D Tensor of shape :math:`[N, C, L]`, where :math:`N` is the batch size, :math:`C` is the number of channels and :math:`L` is the length of the input feature; its data type is float32 or float64.
- **output_size** (int) - The length of the output feature; its data type is int.
- **name** (str, optional) - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.

Returns
:::::::::
``Tensor``, the target 3-D Tensor obtained by applying adaptive pooling (池化) to the input `x`; its data type is the same as that of the input.
Tensor, the result of 1D adaptive average pooling (汇聚); its data type is the same as that of the input.


Code example
:::::::::

.. code-block:: python

# average adaptive pool1d
# suppose the input data is in shape of [N, C, L] and `output_size` is m;
# the output shape is [N, C, m]. Adaptive pooling divides the L dimension
# of the input data into m grids evenly and performs pooling in each
# grid to get the output.
# adaptive avg pool performs calculations as follows:
#
# for i in range(m):
#     lstart = floor(i * L / m)
#     lend = ceil((i + 1) * L / m)
#     output[:, :, i] = sum(input[:, :, lstart: lend]) / (lend - lstart)
#
import paddle
import paddle.nn.functional as F
import numpy as np

data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))
pool_out = F.adaptive_avg_pool1d(data, output_size=16)
# pool_out shape: [1, 3, 16]
COPY-FROM: paddle.nn.functional.adaptive_avg_pool1d:adaptive_avg_pool1d-example
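
As a quick illustration (an assumed snippet, not the one pulled in by COPY-FROM), the functional form produces the same result as the layer form documented above:

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    x = paddle.uniform(shape=[1, 3, 32], dtype='float32')

    out_fn = F.adaptive_avg_pool1d(x, output_size=16)            # functional form
    out_layer = paddle.nn.AdaptiveAvgPool1D(output_size=16)(x)   # equivalent layer form

    print(out_fn.shape)                        # [1, 3, 16]
    print(paddle.allclose(out_fn, out_layer))  # boolean Tensor holding True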
38 changes: 8 additions & 30 deletions docs/api/paddle/nn/initializer/XavierNormal_cn.rst
@@ -6,48 +6,26 @@ XavierNormal
.. py:class:: paddle.nn.initializer.XavierNormal(fan_in=None, fan_out=None, name=None)


This class implements the Xavier weight initializer, which comes from the paper `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_ by Xavier Glorot and Yoshua Bengio
The Xavier weight initializer using a normal distribution. The Xavier weight initializer comes from the paper `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_ by Xavier Glorot and Yoshua Bengio
Collaborator

Sentence-final punctuation is missing.

Contributor Author

Done.


This initializer is designed to keep the gradient scale roughly the same across all layers.

In the normal-distribution case, the mean is 0 and the standard deviation is:
This initializer is designed to keep the gradient scale roughly the same across all layers. The normal distribution used has mean :math:`0` and a standard deviation of

.. math::

x = \sqrt{\frac{2.0}{fan\_in+fan\_out}}
x = \sqrt{\frac{2.0}{fan\_in+fan\_out}}.

Parameters
::::::::::::

- **fan_in** (float, optional) - fan_in for Xavier initialization, inferred from the tensor. Default: None.
- **fan_out** (float, optional) - fan_out for Xavier initialization, inferred from the tensor. Default: None.
- **name** str, optional - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.
- **fan_in** (float, optional) - fan_in for Xavier initialization, inferred from the Tensor. Default: None.
- **fan_out** (float, optional) - fan_out for Xavier initialization, inferred from the Tensor. Default: None.
- **name** (str, optional) - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.

Returns
::::::::::::

Parameters initialized by the Xavier weight initializer with a normal distribution.
Parameters initialized by the Xavier weight initializer with a normal distribution.

Code example
::::::::::::

.. code-block:: python

import paddle

data = paddle.ones(shape=[3, 1, 2], dtype='float32')
weight_attr = paddle.framework.ParamAttr(
    name="linear_weight",
    initializer=paddle.nn.initializer.XavierNormal())
bias_attr = paddle.framework.ParamAttr(
    name="linear_bias",
    initializer=paddle.nn.initializer.XavierNormal())
linear = paddle.nn.Linear(2, 2, weight_attr=weight_attr, bias_attr=bias_attr)
# linear.weight: [[ 0.06910077 -0.18103665]
#                 [-0.02546741 -1.0402188 ]]
# linear.bias: [-0.5012929  0.12418364]

res = linear(data)
# res: [[[-0.4576595 -1.0970719]]
#       [[-0.4576595 -1.0970719]]
#       [[-0.4576595 -1.0970719]]]
COPY-FROM: paddle.nn.initializer.XavierNormal:initializer_XavierNormal-example
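
A small sanity-check sketch for the standard-deviation formula (assumed example; fan_in = 256 and fan_out = 128 are arbitrary): the sampled weights should have a standard deviation close to sqrt(2 / (fan_in + fan_out)).

.. code-block:: python

    import numpy as np
    import paddle

    fan_in, fan_out = 256, 128
    attr = paddle.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())
    linear = paddle.nn.Linear(fan_in, fan_out, weight_attr=attr)

    expected_std = np.sqrt(2.0 / (fan_in + fan_out))        # about 0.072
    print(float(linear.weight.numpy().std()), expected_std)  # both close to 0.072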
38 changes: 8 additions & 30 deletions docs/api/paddle/nn/initializer/XavierUniform_cn.rst
@@ -6,48 +6,26 @@ XavierUniform
.. py:class:: paddle.nn.initializer.XavierUniform(fan_in=None, fan_out=None, name=None)


This class implements the Xavier weight initializer, which comes from the paper `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_ by Xavier Glorot and Yoshua Bengio
The Xavier weight initializer using a uniform distribution. The Xavier weight initializer comes from the paper `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_ by Xavier Glorot and Yoshua Bengio

This initializer is designed to keep the gradient scale roughly the same across all layers.

In the uniform-distribution case, values are drawn from the range [-x, x], where:
This initializer is designed to keep the gradient scale roughly the same across all layers. In the uniform-distribution case, values are drawn from the range :math:`[-x,x]`, where

.. math::

x = \sqrt{\frac{6.0}{fan\_in+fan\_out}}
x = \sqrt{\frac{6.0}{fan\_in+fan\_out}}.

Parameters
::::::::::::

- **fan_in** (float, optional) - fan_in for Xavier initialization, inferred from the tensor. Default: None.
- **fan_out** (float, optional) - fan_out for Xavier initialization, inferred from the tensor. Default: None.
- **name** str, optional - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.
- **fan_in** (float, optional) - fan_in for Xavier initialization, inferred from the Tensor. Default: None.
- **fan_out** (float, optional) - fan_out for Xavier initialization, inferred from the Tensor. Default: None.
- **name** (str, optional) - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.

Returns
::::::::::::

Parameters initialized by the Xavier weight initializer with a uniform distribution.
Parameters obtained from the Xavier weight initialization method with a uniform distribution.

Code example
::::::::::::

.. code-block:: python

import paddle

data = paddle.ones(shape=[3, 1, 2], dtype='float32')
weight_attr = paddle.framework.ParamAttr(
    name="linear_weight",
    initializer=paddle.nn.initializer.XavierUniform())
bias_attr = paddle.framework.ParamAttr(
    name="linear_bias",
    initializer=paddle.nn.initializer.XavierUniform())
linear = paddle.nn.Linear(2, 2, weight_attr=weight_attr, bias_attr=bias_attr)
# linear.weight: [[-0.04229349 -1.1248565 ]
#                 [-0.10789523 -0.5938053 ]]
# linear.bias: [ 1.1983747  -0.40201235]

res = linear(data)
# res: [[[ 1.0481861 -2.1206741]]
#       [[ 1.0481861 -2.1206741]]
#       [[ 1.0481861 -2.1206741]]]
COPY-FROM: paddle.nn.initializer.XavierUniform:initializer_XavierUniform-example
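
Likewise for the uniform variant, a hedged sketch (fan_in and fan_out chosen arbitrarily): every sampled weight should fall inside [-x, x] with x = sqrt(6 / (fan_in + fan_out)).

.. code-block:: python

    import numpy as np
    import paddle

    fan_in, fan_out = 256, 128
    bound = np.sqrt(6.0 / (fan_in + fan_out))          # x, about 0.125
    attr = paddle.ParamAttr(initializer=paddle.nn.initializer.XavierUniform())
    linear = paddle.nn.Linear(fan_in, fan_out, weight_attr=attr)

    w = linear.weight.numpy()
    print(w.min() >= -bound, w.max() <= bound)         # True True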
42 changes: 12 additions & 30 deletions docs/api/paddle/where_cn.rst
@@ -8,50 +8,32 @@ where

Collaborator

.. py:function:: paddle.where(condition, x=None, y=None, name=None)

Contributor Author

Done.



Returns a multi-dimensional ``Tensor`` whose elements are selected from ``x`` or ``y`` according to the input ``condition``:
Selects the corresponding elements from ``x`` or ``y`` according to ``condition`` to form a new Tensor. Specifically,

.. math::
Out_i =
\left\{
\begin{aligned}
&X_i, & & if \ cond_i \ is \ True \\
&Y_i, & & if \ cond_i \ is \ False \\
\end{aligned}
\right.
out_i =
\begin{cases}
x_i, & \text{if} \ condition_i \ \text{is} \ True \\
y_i, & \text{if} \ condition_i \ \text{is} \ False \\
\end{cases}

.. note::
``numpy.where(condition)`` behaves the same as ``paddle.nonzero(condition, as_tuple=True)``.
Collaborator

It would be best to add a cross-reference link for paddle.nonzero; see the existing docs for reference.

Contributor Author

Done.


Parameters
::::::::::::

- **condition** (Tensor) - The condition for selecting elements of ``x`` or ``y``. Where it is ``True`` (non-zero), ``x`` is selected; otherwise ``y`` is selected.
- **x** (Tensor, Scalar, optional) - A multi-dimensional ``Tensor`` or ``Scalar`` with data type ``float32``, ``float64``, ``int32`` or ``int64``. ``x`` and ``y`` must either both be given or both be omitted.
- **y** (Tensor, Scalar, optional) - A multi-dimensional ``Tensor`` or ``Scalar`` with data type ``float32``, ``float64``, ``int32`` or ``int64``. ``x`` and ``y`` must either both be given or both be omitted.
- **name** (str, optional) - The name of the operation (optional, default None). For more information, see :ref:`api_guide_Name`.
- **condition** (Tensor) - The condition that decides whether an element of ``x`` or ``y`` is selected. Where it is True (non-zero), ``x`` is selected; otherwise ``y`` is selected.
- **x** (Tensor|scalar, optional) - The Tensor or scalar selected where the condition is True; its data type is float32, float64, int32 or int64. ``x`` and ``y`` must either both be given or both be omitted.
- **y** (Tensor|scalar, optional) - The Tensor or scalar selected where the condition is False; its data type is float32, float64, int32 or int64. ``x`` and ``y`` must either both be given or both be omitted.
- **name** (str, optional) - See :ref:`api_guide_Name` for usage details. Generally there is no need to set it. Default: None.

Returns
::::::::::::
A ``Tensor`` with the same data type as ``x``.
A Tensor with the same shape as ``condition`` and the same data type as ``x`` and ``y``.



Code example
::::::::::::

.. code-block:: python

import paddle

x = paddle.to_tensor([0.9383, 0.1983, 3.2, 1.2])
y = paddle.to_tensor([1.0, 1.0, 1.0, 1.0])
out = paddle.where(x > 1, x, y)

print(out)
# out: [1.0, 1.0, 3.2, 1.2]

out = paddle.where(x > 1)
print(out)
# out: (Tensor(shape=[2, 1], dtype=int64, place=CPUPlace, stop_gradient=True,
#       [[2],
#       [3]]),)
COPY-FROM: paddle.where:where-example
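
A short sketch (assumed, mirroring the removed inline example) of the element-wise selection rule and of the condition-only form, which behaves like ``paddle.nonzero(condition, as_tuple=True)``:

.. code-block:: python

    import paddle

    x = paddle.to_tensor([0.9, 0.2, 3.2, 1.2])
    y = paddle.ones([4])

    out = paddle.where(x > 1, x, y)          # x_i where the condition holds, else y_i
    print(out.numpy())                       # [1.  1.  3.2 1.2]

    idx = paddle.where(x > 1)                # like paddle.nonzero(x > 1, as_tuple=True)
    print(idx[0].numpy().tolist())           # [[2], [3]]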