[Refactor] LAZY_LEGACY_OP=False (#1832)
vmoens authored Jan 29, 2024
1 parent c2fae32 commit 9da61f2
Showing 40 changed files with 79 additions and 36 deletions.
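This commit flips TorchRL's default handling of tensordict stacking: the CI scripts now export LAZY_LEGACY_OP=False, torchrl/__init__.py calls set_lazy_legacy(False).set(), and call sites that still need a lazy stack switch from torch.stack to LazyStackedTensorDict.lazy_stack. As a minimal illustrative sketch (not code from the commit, using only the public tensordict API already imported in the diff: set_lazy_legacy and LazyStackedTensorDict), the behavioural difference is:

```python
# Illustrative sketch (not part of the diff): what the lazy_legacy switch changes.
import torch
from tensordict import LazyStackedTensorDict, TensorDict, set_lazy_legacy

td0 = TensorDict({"obs": torch.zeros(3)}, batch_size=[])
td1 = TensorDict({"obs": torch.ones(3)}, batch_size=[])

# Legacy behaviour (LAZY_LEGACY_OP=True): torch.stack returns a lazy stack.
with set_lazy_legacy(True):
    assert isinstance(torch.stack([td0, td1], 0), LazyStackedTensorDict)

# New default (LAZY_LEGACY_OP=False): torch.stack densely stacks the inputs,
# so code that relies on laziness must call lazy_stack explicitly; that is
# the substitution made throughout the test and env files below.
with set_lazy_legacy(False):
    dense = torch.stack([td0, td1], 0)
    lazy = LazyStackedTensorDict.lazy_stack([td0, td1], 0)
    assert not isinstance(dense, LazyStackedTensorDict)
    assert isinstance(lazy, LazyStackedTensorDict)
```

This also explains why several .contiguous() and .to_tensordict() calls are dropped in the diff: once torch.stack returns a dense result, those conversions appear redundant.
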
2 changes: 1 addition & 1 deletion .github/unittest/linux/scripts/run_all.sh
@@ -76,7 +76,7 @@ export DISPLAY=:0
export SDL_VIDEODRIVER=dummy

# legacy from bash scripts: remove?
conda env config vars set MUJOCO_GL=$MUJOCO_GL PYOPENGL_PLATFORM=$MUJOCO_GL DISPLAY=:0 SDL_VIDEODRIVER=dummy
conda env config vars set MUJOCO_GL=$MUJOCO_GL PYOPENGL_PLATFORM=$MUJOCO_GL DISPLAY=:0 SDL_VIDEODRIVER=dummy LAZY_LEGACY_OP=False

pip3 install pip --upgrade
pip install virtualenv

1 change: 1 addition & 0 deletions .github/unittest/linux_distributed/scripts/run_test.sh
@@ -18,6 +18,7 @@ lib_dir="${env_dir}/lib"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$lib_dir
export MKL_THREADING_LAYER=GNU
export CKPT_BACKEND=torch
export LAZY_LEGACY_OP=False
export BATCHED_PIPE_TIMEOUT=60

python .github/unittest/helpers/coverage_run_parallel.py -m pytest test/smoke_test.py -v --durations 200

1 change: 1 addition & 0 deletions .github/unittest/linux_examples/scripts/run_all.sh
@@ -74,6 +74,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$root_dir/.mujoco/mujoco210/bin
export SDL_VIDEODRIVER=dummy
export MUJOCO_GL=egl
export PYOPENGL_PLATFORM=egl
export LAZY_LEGACY_OP=False

conda env config vars set MUJOCO_PY_MUJOCO_PATH=$root_dir/.mujoco/mujoco210 \
DISPLAY=unix:0.0 \

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_ataridqn/run_test.sh
@@ -8,6 +8,7 @@ conda activate ./env
apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf libosmesa6-dev libgl1-mesa-glx libglfw3 swig3.0
ln -s /usr/bin/swig3.0 /usr/bin/swig

export LAZY_LEGACY_OP=False
export PYTORCH_TEST_WITH_SLOW='1'
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_brax/run_test.sh
@@ -7,6 +7,7 @@ conda activate ./env


export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_d4rl/run_test.sh
@@ -24,6 +24,7 @@ cd ..
#cd ..

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_envpool/run_test.sh
@@ -12,6 +12,7 @@ eval "$(./conda/bin/conda shell.bash hook)"
conda activate ./env

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_gen-dgrl/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_gym/run_test.sh
@@ -6,6 +6,7 @@ eval "$(./conda/bin/conda shell.bash hook)"
conda activate ./env

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_habitat/run_test.sh
@@ -16,6 +16,7 @@ conda env config vars set LD_PRELOAD=$LD_PRELOAD:$STDC_LOC
STDC_LOC=$(find conda/ -name "libstdc++.so.6" | head -1)

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_jumanji/run_test.sh
@@ -8,6 +8,7 @@ apt-get update && apt-get install -y git wget


export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_minari/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_openx/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_pettingzoo/run_test.sh
@@ -8,6 +8,7 @@ apt-get update && apt-get install -y git wget


export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_rlhf/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get install -y git gcc
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

3 changes: 2 additions & 1 deletion .github/unittest/linux_libs/scripts_robohive/setup_env.sh
@@ -66,7 +66,8 @@ conda env config vars set \
DISPLAY=unix:0.0 \
PYOPENGL_PLATFORM=egl \
NVIDIA_PATH=/usr/src/nvidia-470.63.01 \
sim_backend=MUJOCO
sim_backend=MUJOCO \
LAZY_LEGACY_OP=False

# make env variables apparent
conda deactivate && conda activate "${env_dir}"

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_roboset/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_sklearn/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get install -y git gcc
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_smacv2/run_test.sh
@@ -8,6 +8,7 @@ apt-get update && apt-get install -y git wget


export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_vd4rl/run_test.sh
@@ -9,6 +9,7 @@ apt-get update && apt-get remove swig -y && apt-get install -y git gcc patchelf
ln -s /usr/bin/swig3.0 /usr/bin/swig

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_libs/scripts_vmas/run_test.sh
@@ -8,6 +8,7 @@ apt-get update && apt-get install -y git wget


export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

@@ -6,6 +6,7 @@ eval "$(./conda/bin/conda shell.bash hook)"
conda activate ./env

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/linux_optdeps/scripts/run_test.sh
@@ -9,6 +9,7 @@ conda activate ./env
STDC_LOC=$(find conda/ -name "libstdc++.so.6" | head -1)

export PYTORCH_TEST_WITH_SLOW='1'
export LAZY_LEGACY_OP=False
python -m torch.utils.collect_env
# Avoid error: "fatal: unsafe repository"
git config --global --add safe.directory '*'

1 change: 1 addition & 0 deletions .github/unittest/windows_optdepts/scripts/run_test.sh
@@ -12,6 +12,7 @@ source "$this_dir/set_cuda_envs.sh"
export CKPT_BACKEND=torch
export MAX_IDLE_COUNT=60
export BATCHED_PIPE_TIMEOUT=60
export LAZY_LEGACY_OP=False

python -m torch.utils.collect_env
pytest --junitxml=test-results/junit.xml -v --durations 200 --ignore test/test_distributed.py --ignore test/test_rlhf.py
5 changes: 3 additions & 2 deletions test/test_collector.py
@@ -35,6 +35,7 @@
MultiKeyCountingEnvPolicy,
NestedCountingEnv,
)
from tensordict import LazyStackedTensorDict
from tensordict.nn import TensorDictModule, TensorDictSequential
from tensordict.tensordict import assert_allclose_td, TensorDict

@@ -1896,7 +1897,7 @@ def test_aggregate_reset_to_root(self):
},
[1, 2],
)
td = torch.stack([td0, td1], 0)
td = LazyStackedTensorDict.lazy_stack([td0, td1], 0)
assert _aggregate_end_of_traj(td).all()

def test_aggregate_reset_to_root_keys(self):
@@ -1991,7 +1992,7 @@ def test_aggregate_reset_to_root_keys(self):
},
[1, 2],
)
td = torch.stack([td0, td1], 0)
td = LazyStackedTensorDict.lazy_stack([td0, td1], 0)
assert _aggregate_end_of_traj(td, reset_keys=["_reset", ("a", "_reset")]).all()

def test_aggregate_reset_to_root_errors(self):

11 changes: 6 additions & 5 deletions test/test_env.py
@@ -1141,10 +1141,12 @@ def test_steptensordict(
tds[0]["this", "one"] = torch.zeros(2)
tds[1]["but", "not", "this", "one"] = torch.ones(2)
tds[0]["next", "this", "one"] = torch.ones(2) * 2
tensordict = torch.stack(tds, 0)
tensordict = LazyStackedTensorDict.lazy_stack(tds, 0)
next_tensordict = TensorDict({}, [4]) if has_out else None
if has_out and lazy_stack:
next_tensordict = torch.stack(next_tensordict.unbind(0), 0)
next_tensordict = LazyStackedTensorDict.lazy_stack(
next_tensordict.unbind(0), 0
)
out = step_mdp(
tensordict.lock_(),
keep_other=keep_other,
@@ -1498,8 +1500,7 @@ def test_heterogeenous(
[td_batch_size],
)
)
lazy_td = torch.stack(tds, dim=1)
input_td = lazy_td
lazy_td = LazyStackedTensorDict.lazy_stack(tds, dim=1)

td = step_mdp(
lazy_td.lock_(),
@@ -1785,7 +1786,7 @@ def main_penv(j, q=None):
r_p.append(env_s.rollout(100, break_when_any_done=False, policy=policy))
r_s.append(env_p.rollout(100, break_when_any_done=False, policy=policy))

td_equals = torch.stack(r_p).contiguous() == torch.stack(r_s).contiguous()
td_equals = torch.stack(r_p) == torch.stack(r_s)
if td_equals.all():
if q is not None:
q.put(("passed", j))

4 changes: 2 additions & 2 deletions test/test_libs.py
@@ -555,7 +555,7 @@ def non_null_obs(batched_td):
env_type = type(env0._env)

assert_allclose_td(*tdreset, rtol=RTOL, atol=ATOL)
tdrollout = torch.stack(tdrollout, 0).contiguous()
tdrollout = torch.stack(tdrollout, 0)

# custom filtering of non-null obs: mujoco rendering sometimes fails
# and renders black images. To counter this in the tests, we select
@@ -597,7 +597,7 @@ def non_null_obs(batched_td):
assert_allclose_td(tdreset[0], tdreset2, rtol=RTOL, atol=ATOL)
assert final_seed0 == final_seed2
# same magic trick for mujoco as above
tdrollout = torch.stack([tdrollout[0], rollout2], 0).contiguous()
tdrollout = torch.stack([tdrollout[0], rollout2], 0)
idx = non_null_obs(tdrollout)
assert_allclose_td(
tdrollout[0][..., idx], tdrollout[1][..., idx], rtol=RTOL, atol=ATOL

4 changes: 2 additions & 2 deletions test/test_shared.py
@@ -9,7 +9,7 @@

import pytest
import torch
from tensordict import TensorDict
from tensordict import LazyStackedTensorDict, TensorDict
from torch import multiprocessing as mp


@@ -81,7 +81,7 @@ def remote_process(command_pipe_child, command_pipe_parent, tensordict):
command_pipe_parent.close()
assert isinstance(tensordict, TensorDict), f"td is of type {type(tensordict)}"
assert tensordict.is_shared() or tensordict.is_memmap()
new_tensordict = torch.stack(
new_tensordict = LazyStackedTensorDict.lazy_stack(
[
tensordict[i].contiguous().clone().zero_()
for i in range(tensordict.shape[0])

2 changes: 1 addition & 1 deletion test/test_transforms.py
@@ -791,7 +791,7 @@ def test_transform_model(self, dim, N, padding):
model(tdbase0)
tdbase0.batch_size = [10]
tdbase0 = tdbase0.expand(5, 10)
tdbase0_copy = tdbase0.transpose(0, 1).to_tensordict()
tdbase0_copy = tdbase0.transpose(0, 1)
tdbase0.refine_names("time", None)
tdbase0_copy.names = [None, "time"]
v1 = model(tdbase0)

2 changes: 1 addition & 1 deletion torchrl/__init__.py
@@ -11,7 +11,7 @@

from torch import multiprocessing as mp

set_lazy_legacy(True).set()
set_lazy_legacy(False).set()

if torch.cuda.device_count() > 1:
n = torch.cuda.device_count() - 1

5 changes: 4 additions & 1 deletion torchrl/collectors/utils.py
@@ -6,6 +6,8 @@
from typing import Callable

import torch

from tensordict import set_lazy_legacy
from tensordict.tensordict import pad, TensorDictBase


@@ -25,6 +27,7 @@ def stacked_output_fun(*args, **kwargs):
return stacked_output_fun


@set_lazy_legacy(False)
def split_trajectories(
rollout_tensordict: TensorDictBase, prefix=None
) -> TensorDictBase:
@@ -88,7 +91,7 @@ def split_trajectories(
),
)
if rollout_tensordict.ndimension() == 1:
rollout_tensordict = rollout_tensordict.unsqueeze(0).to_tensordict()
rollout_tensordict = rollout_tensordict.unsqueeze(0)
return rollout_tensordict.unflatten_keys(sep)
out_splits = rollout_tensordict.view(-1).split(splits, 0)

8 changes: 5 additions & 3 deletions torchrl/envs/batched_envs.py
@@ -348,7 +348,7 @@ def _check_for_empty_spec(specs: CompositeSpec):
self.done_spec = output_spec["full_done_spec"]

self._dummy_env_str = str(meta_data[0])
self._env_tensordict = torch.stack(
self._env_tensordict = LazyStackedTensorDict.lazy_stack(
[meta_data.tensordict for meta_data in meta_data], 0
)
self._batch_locked = meta_data[0].batch_locked
@@ -463,7 +463,7 @@ def _create_td(self) -> None:
)
for tensordict in shared_tensordict_parent
]
shared_tensordict_parent = torch.stack(
shared_tensordict_parent = LazyStackedTensorDict.lazy_stack(
shared_tensordict_parent,
0,
)
@@ -474,7 +474,9 @@ def _create_td(self) -> None:
self.shared_tensordicts = [
td.clone() for td in self.shared_tensordict_parent.unbind(0)
]
self.shared_tensordict_parent = torch.stack(self.shared_tensordicts, 0)
self.shared_tensordict_parent = LazyStackedTensorDict.lazy_stack(
self.shared_tensordicts, 0
)
else:
# Multi-task: we share tensordict that *may* have different keys
# LazyStacked already stores this so we don't need to do anything

11 changes: 7 additions & 4 deletions torchrl/envs/common.py
@@ -14,7 +14,7 @@
import numpy as np
import torch
import torch.nn as nn
from tensordict import unravel_key
from tensordict import LazyStackedTensorDict, unravel_key
from tensordict.tensordict import TensorDictBase
from tensordict.utils import NestedKey
from torchrl._utils import _replace_last, implement_for, prod, seed_generator
@@ -2307,9 +2307,12 @@ def rollout(
else:
tensordicts = self._rollout_nonstop(**kwargs)
batch_size = self.batch_size if tensordict is None else tensordict.batch_size
out_td = torch.stack(tensordicts, len(batch_size), out=out)
if return_contiguous:
out_td = out_td.contiguous()
out_td = torch.stack(tensordicts, len(batch_size), out=out)
else:
out_td = LazyStackedTensorDict.lazy_stack(
tensordicts, len(batch_size), out=out
)
out_td.refine_names(..., "time")
return out_td

@@ -2408,7 +2411,7 @@ def step_and_maybe_reset(
... for i in range(n):
... data, data_ = env.step_and_maybe_reset(data_)
... result.append(data)
... return torch.stack(result).contiguous()
... return torch.stack(result)
>>> env = ParallelEnv(2, lambda: GymEnv("CartPole-v1"))
>>> print(rollout(env, 2))
TensorDict(

2 comments on commit 9da61f2

@github-actions

⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'CPU Benchmark Results'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 2.

| Benchmark suite | Current: 9da61f2 | Previous: c2fae32 | Ratio |
| --- | --- | --- | --- |
| benchmarks/test_replaybuffer_benchmark.py::test_rb_sample[TensorDictReplayBuffer-ListStorage-RandomSampler-4000] | 348.0321473760917 iter/sec (stddev: 0.0002506600019040202) | 731.8580030244002 iter/sec (stddev: 0.00002810424872660433) | 2.10 |
| benchmarks/test_replaybuffer_benchmark.py::test_rb_sample[TensorDictReplayBuffer-ListStorage-SamplerWithoutReplacement-4000] | 371.52921471213494 iter/sec (stddev: 0.00022713818266228766) | 758.5672041757425 iter/sec (stddev: 0.00011267940432240235) | 2.04 |
| benchmarks/test_replaybuffer_benchmark.py::test_rb_iterate[TensorDictReplayBuffer-ListStorage-RandomSampler-4000] | 367.72149961403636 iter/sec (stddev: 0.00012154523163631797) | 736.5762208766832 iter/sec (stddev: 0.00004301955397168821) | 2.00 |
| benchmarks/test_replaybuffer_benchmark.py::test_rb_iterate[TensorDictReplayBuffer-ListStorage-SamplerWithoutReplacement-4000] | 360.50277953484994 iter/sec (stddev: 0.0002440204196285399) | 761.6587386717717 iter/sec (stddev: 0.00013353047245138507) | 2.11 |
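
Here the Ratio column corresponds to previous throughput divided by current throughput; for the first row, 731.858 / 348.032 ≈ 2.10, above the threshold of 2.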

This comment was automatically generated by workflow using github-action-benchmark.

CC: @vmoens

@github-actions

⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'GPU Benchmark Results'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 2.

| Benchmark suite | Current: 9da61f2 | Previous: c2fae32 | Ratio |
| --- | --- | --- | --- |
| benchmarks/test_replaybuffer_benchmark.py::test_rb_sample[TensorDictReplayBuffer-ListStorage-SamplerWithoutReplacement-4000] | 273.5205647934591 iter/sec (stddev: 0.0002980492833751097) | 556.0965316949768 iter/sec (stddev: 0.0001386683108469203) | 2.03 |

This comment was automatically generated by workflow using github-action-benchmark.

CC: @vmoens
