Fix the documentation of paddle.assign and other APIs #4850
@@ -6,45 +6,23 @@ adaptive_avg_pool1d

 .. py:function:: paddle.nn.functional.adaptive_avg_pool1d(x, output_size, name=None)
Review comment: Suggest updating AdaptiveAvgPool2D & adaptive_avg_pool2d and AdaptiveAvgPool3D & adaptive_avg_pool3d in the same way; they have the same issue.
Reply: I'd rather leave those to someone who accepts "池化" as the translation.
-该算子根据输入 `x` , `output_size` 等参数对一个输入Tensor计算1D的自适应平均池化。输入和输出都是3-D Tensor,
-默认是以 `NCL` 格式表示的,其中 `N` 是 batch size, `C` 是通道数, `L` 是输入特征的长度.
+根据 `output_size` 对 Tensor `x` 计算 1D 自适应平均汇聚。
Review comment: In rst rendering this shows up as a gray background block.
Reply: Done; I genuinely did not know that before.
 .. note::
-    详细请参考对应的 `Class` 请参考: :ref:`cn_api_nn_AdaptiveAvgPool1D` 。
+    详细请参考对应的 `Class` 请参考: :ref:`cn_api_nn_AdaptiveAvgPool1D`。
 参数
 :::::::::
-- **x** (Tensor): 当前算子的输入, 其是一个形状为 `[N, C, L]` 的3-D Tensor。其中 `N` 是batch size, `C` 是通道数, `L` 是输入特征的长度。 其数据类型为float32或者float64。
-- **output_size** (int): 算子输出特征图的长度,其数据类型为int。
-- **name** (str,可选): 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
+- **x** (Tensor) - 自适应平均汇聚的输入,它是形状为 :math:`[N,C,L]` 的 3-D Tensor,其中 :math:`N` 是批大小,:math:`C` 是通道数而 :math:`L` 是输入特征的长度,其数据类型为 float32 或 float64。
+- **output_size** (int) - 输出特征的长度,数据类型为 int。
+- **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为 None。
 返回
 :::::::::
-``Tensor``, 输入 `x` 经过自适应池化计算得到的目标3-D Tensor,其数据类型与输入相同。
+Tensor,计算 1D 自适应平均汇聚的结果,数据类型与输入相同。
 代码示例
 :::::::::

-.. code-block:: python
-
-    # average adaptive pool1d
-    # suppose input data in shape of [N, C, L], `output_size` is m,
-    # output shape is [N, C, m], adaptive pool divide L dimension
-    # of input data into m grids averagely and performs poolings in each
-    # grid to get output.
-    # adaptive avg pool performs calculations as follow:
-    #
-    # for i in range(m):
-    #     lstart = floor(i * L / m)
-    #     lend = ceil((i + 1) * L / m)
-    #     output[:, :, i] = sum(input[:, :, lstart: lend])/(lstart - lend)
-    #
-    import paddle
-    import paddle.nn.functional as F
-    import numpy as np
-
-    data = paddle.to_tensor(np.random.uniform(-1, 1, [1, 3, 32]).astype(np.float32))
-    pool_out = F.adaptive_avg_pool1d(data, output_size=16)
-    # pool_out shape: [1, 3, 16])
+COPY-FROM: paddle.nn.functional.adaptive_avg_pool1d:adaptive_avg_pool1d-example
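The COPY-FROM directive pulls the runnable snippet from the docstring in the Paddle source instead of duplicating it in the .rst file. For orientation, a minimal usage sketch in the spirit of that example (the shapes here are illustrative, not taken from the referenced docstring):

    import paddle
    import paddle.nn.functional as F

    # Input in [N, C, L] layout; the L dimension (32) is split into
    # output_size (16) roughly equal bins and each bin is averaged.
    x = paddle.rand([1, 3, 32])
    out = F.adaptive_avg_pool1d(x, output_size=16)
    print(out.shape)  # [1, 3, 16]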
@@ -6,48 +6,26 @@ XavierNormal

 .. py:class:: paddle.nn.initializer.XavierNormal(fan_in=None, fan_out=None, name=None)

-该类实现Xavier权重初始化方法( Xavier weight initializer),Xavier权重初始化方法出自Xavier Glorot和Yoshua Bengio的论文 `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_
+使用正态分布的泽维尔权重初始化方法。泽维尔权重初始化方法出自泽维尔·格洛特和约书亚·本吉奥的论文 `Understanding the difficulty of training deep feedforward neural networks <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_
Review comment: End-of-sentence punctuation is missing.
Reply: Done.
-该初始化函数用于保持所有层的梯度尺度几乎一致。
-正态分布的情况下,均值为0,标准差为:
+该初始化函数用于保持所有层的梯度尺度几乎一致。所使用的正态分布的的均值为 :math:`0`,标准差为

 .. math::

-    x = \sqrt{\frac{2.0}{fan\_in+fan\_out}}
+    x = \sqrt{\frac{2.0}{fan\_in+fan\_out}}.
 参数
 ::::::::::::

-- **fan_in** (float,可选) - 用于Xavier初始化的fan_in,从tensor中推断。默认为None。
-- **fan_out** (float,可选) - 用于Xavier初始化的fan_out,从tensor中推断。默认为None。
-- **name** (str,可选)- 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为None。
+- **fan_in** (float,可选) - 用于泽维尔初始化的 fan_in,从 Tensor 中推断,默认值为 None。
+- **fan_out** (float,可选) - 用于泽维尔初始化的 fan_out,从 Tensor 中推断,默认值为 None。
+- **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为 None。
 返回
 ::::::::::::

-由使用正态分布的Xavier权重初始化的参数。
+由使用正态分布的泽维尔权重初始化的参数。
 代码示例
 ::::::::::::

-.. code-block:: python
-
-    import paddle
-
-    data = paddle.ones(shape=[3, 1, 2], dtype='float32')
-    weight_attr = paddle.framework.ParamAttr(
-        name="linear_weight",
-        initializer=paddle.nn.initializer.XavierNormal())
-    bias_attr = paddle.framework.ParamAttr(
-        name="linear_bias",
-        initializer=paddle.nn.initializer.XavierNormal())
-    linear = paddle.nn.Linear(2, 2, weight_attr=weight_attr, bias_attr=bias_attr)
-    # inear.weight: [[ 0.06910077 -0.18103665]
-    #               [-0.02546741 -1.0402188 ]]
-    # linear.bias: [-0.5012929 0.12418364]
-
-    res = linear(data)
-    # res: [[[-0.4576595 -1.0970719]]
-    #      [[-0.4576595 -1.0970719]]
-    #      [[-0.4576595 -1.0970719]]]
+COPY-FROM: paddle.nn.initializer.XavierNormal:initializer_XavierNormal-example
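As a quick illustration of the formula in this hunk (not the snippet the COPY-FROM directive points to): for a linear layer with 2 inputs and 2 outputs, fan_in = fan_out = 2, so XavierNormal samples weights with mean 0 and standard deviation sqrt(2.0 / (2 + 2)) ≈ 0.707. A minimal sketch, with layer sizes chosen only for illustration:

    import paddle

    # XavierNormal draws weights from N(0, std^2) with
    # std = sqrt(2 / (fan_in + fan_out)); here fan_in = fan_out = 2.
    weight_attr = paddle.ParamAttr(initializer=paddle.nn.initializer.XavierNormal())
    linear = paddle.nn.Linear(2, 2, weight_attr=weight_attr)
    print(linear.weight.numpy())  # sampled values vary from run to run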
@@ -8,50 +8,32 @@ where
Review comment: .. py:function:: paddle.where(condition, x=None, y=None, name=None)
Reply: Done.
-返回一个根据输入 ``condition``, 选择 ``x`` 或 ``y`` 的元素组成的多维 ``Tensor`` :
+根据 ``condition`` 来选择 ``x`` 或 ``y`` 中的对应元素来组成新的 Tensor。具体地,
 .. math::

-    Out_i =
-    \left\{
-    \begin{aligned}
-    &X_i, & & if \ cond_i \ is \ True \\
-    &Y_i, & & if \ cond_i \ is \ False \\
-    \end{aligned}
-    \right.
+    out_i =
+    \begin{cases}
+    x_i, & \text{if} \ condition_i \ \text{is} \ True \\
+    y_i, & \text{if} \ condition_i \ \text{is} \ False \\
+    \end{cases}

 .. note::
     ``numpy.where(condition)`` 功能与 ``paddle.nonzero(condition, as_tuple=True)`` 相同。
Review comment: paddle.nonzero would be better with a hyperlink; see the docs for reference.
Reply: Done.
 参数
 ::::::::::::

-- **condition** (Tensor)- 选择 ``x`` 或 ``y`` 元素的条件 。为 ``True`` (非零值)时,选择 ``x`` ,否则选择 ``y`` 。
-- **x** (Tensor,Scalar,可选)- 多维 ``Tensor`` 或 ``Scalar``,数据类型为 ``float32`` 或 ``float64`` 或 ``int32`` 或 ``int64`` 。``x`` 和 ``y`` 必须都给出或者都不给出。
-- **y** (Tensor,Scalar,可选)- 多维 ``Tensor`` 或 ``Scalar``,数据类型为 ``float32`` 或 ``float64`` 或 ``int32`` 或 ``int64`` 。``x`` 和 ``y`` 必须都给出或者都不给出。
-- **name** (str,可选)- 操作的名称(可选,默认值为None)。更多信息请参见 :ref:`api_guide_Name`。
+- **condition** (Tensor) - 选择 ``x`` 或 ``y`` 元素的条件。在为 True(非零值)时,选择 ``x`` ,否则选择 ``y`` 。
+- **x** (Tensor|scalar,可选) - 条件为 True 时选择的 Tensor 或 scalar,数据类型为 float32、float64、int32 或 int64。``x`` 和 ``y`` 必须都给出或者都不给出。
+- **y** (Tensor|scalar,可选) - 条件为 False 时选择的 Tensor 或 scalar,数据类型为 float32、float64、int32 或 int64。``x`` 和 ``y`` 必须都给出或者都不给出。
+- **name** (str,可选) - 具体用法请参见 :ref:`api_guide_Name`,一般无需设置,默认值为 None。
 返回
 ::::::::::::
-Tensor,数据类型与 ``x`` 相同的 ``Tensor`` 。
+Tensor,形状与 ``condition`` 相同,数据类型与 ``x`` 和 ``y`` 相同。
 代码示例
 ::::::::::::

-.. code-block:: python
-
-    import paddle
-
-    x = paddle.to_tensor([0.9383, 0.1983, 3.2, 1.2])
-    y = paddle.to_tensor([1.0, 1.0, 1.0, 1.0])
-    out = paddle.where(x>1, x, y)
-
-    print(out)
-    #out: [1.0, 1.0, 3.2, 1.2]
-
-    out = paddle.where(x>1)
-    print(out)
-    #out: (Tensor(shape=[2, 1], dtype=int64, place=CPUPlace, stop_gradient=True,
-    #            [[2],
-    #            [3]]),)
+COPY-FROM: paddle.where:where-example
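For quick reference, a minimal sketch of the two calling forms covered by this doc, reusing the values from the removed inline example:

    import paddle

    x = paddle.to_tensor([0.9383, 0.1983, 3.2, 1.2])
    y = paddle.to_tensor([1.0, 1.0, 1.0, 1.0])

    # Element-wise select: take x where the condition holds, y elsewhere.
    out = paddle.where(x > 1, x, y)   # [1.0, 1.0, 3.2, 1.2]

    # Single-argument form behaves like paddle.nonzero(x > 1, as_tuple=True).
    idx = paddle.where(x > 1)         # indices of elements greater than 1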
Review comment: "池化" is the common term for pooling; where does "汇聚" come from?
Reply: "汇聚" is the translation recommended by Dr. Li Hang (李航) and others.
Review comment: After internal discussion, we suggest sticking with the common translation, for two reasons: […] If this set of terms gains broad recognition among Chinese developers or is officially promoted in the future, we will consider a full update, but for now the timing does not seem ripe.
Reply: I will change "汇聚" back to "池化" for now, but I do not accept this suggestion. If even a platform like PaddlePaddle will not promote a more appropriate translation, how can it ever become widely known to Chinese developers?
Review comment: OK, we will do more research later to evaluate whether to promote this set of terms. Thanks for pointing it out.