pytest collection errors after following setup guidelines #5212

Open
GenevieveBuckley opened this issue Aug 16, 2021 · 5 comments
Labels
documentation Improve or add to documentation

Comments

@GenevieveBuckley
Contributor

What happened:
After following the new contributor guidelines for distributed, pytest reports both test collection errors and failing tests.

What you expected to happen:
I had expected the test suite to complete without errors or failures.

Presumably something is wonky with my sockets, but I haven't changed anything unusual recently and should have the Ubuntu defaults for everything.
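For context on where the sockets come into it: the collection errors below happen because distributed discovers the machine's external IPv6 address at import time by connecting a UDP socket toward a public IPv6 host. Here is a minimal sketch of that probe; it is a simplified stand-in for `_get_ip` in distributed/utils.py, not the actual implementation, and the Google DNS address and port 80 are simply the values visible in the tracebacks:

```python
import socket

def probe_ipv6(host="2001:4860:4860::8888", port=80):
    """Ask the kernel which local IPv6 address routes toward *host*.

    connect() on a UDP (SOCK_DGRAM) socket sends no packets; it only
    selects a route, so on a machine with no IPv6 connectivity it
    fails fast with OSError(101, "Network is unreachable").
    """
    try:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    except OSError as exc:
        return f"no IPv6 support: {exc}"
    try:
        sock.connect((host, port))
        # The local end of the (routed, never-sent) connection.
        return sock.getsockname()[0]
    except OSError as exc:
        return f"no IPv6 route: {exc}"
    finally:
        sock.close()

print(probe_ipv6())
```

On an IPv6-capable machine this prints a local address; here it presumably hits the `OSError` branch shown in the tracebacks.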

Minimal Complete Verifiable Example:

git clone git@github.com:dask/distributed.git
cd distributed
conda env create --file continuous_integration/environment-3.8.yaml
conda activate dask-distributed
python -m pip install -e .
py.test distributed --verbose
pytest collection errors:
$ py.test distributed --verbose
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/genevieve/anaconda3/envs/dask-distributed/bin/python
cachedir: .pytest_cache
rootdir: /home/genevieve/GitHub/temp/distributed, configfile: setup.cfg
plugins: asyncio-0.12.0, rerunfailures-10.1, repeat-0.8.0, timeout-1.4.2
timeout: 300.0s
timeout method: thread
timeout func_only: False
collected 2180 items / 2 errors / 10 skipped / 2168 selected                   

==================================== ERRORS ====================================
____________ ERROR collecting distributed/comm/tests/test_comms.py _____________
distributed/utils.py:133: in _get_ip
    sock.connect((host, port))
E   OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:
distributed/comm/tests/test_comms.py:47: in <module>
    EXTERNAL_IP6 = get_ipv6()
distributed/utils.py:164: in get_ipv6
    return _get_ip(host, port, family=socket.AF_INET6)
cytoolz/functoolz.pyx:476: in cytoolz.functoolz._memoize.__call__
    ???
distributed/utils.py:142: in _get_ip
    addr_info = socket.getaddrinfo(
../../../anaconda3/envs/dask-distributed/lib/python3.8/socket.py:918: in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -5] No address associated with hostname
______________ ERROR collecting distributed/comm/tests/test_ws.py ______________
distributed/utils.py:133: in _get_ip
    sock.connect((host, port))
E   OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:
distributed/comm/tests/test_ws.py:23: in <module>
    from .test_comms import check_tls_extra
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
../../../anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
    exec(co, module.__dict__)
distributed/comm/tests/test_comms.py:47: in <module>
    EXTERNAL_IP6 = get_ipv6()
distributed/utils.py:164: in get_ipv6
    return _get_ip(host, port, family=socket.AF_INET6)
cytoolz/functoolz.pyx:476: in cytoolz.functoolz._memoize.__call__
    ???
distributed/utils.py:142: in _get_ip
    addr_info = socket.getaddrinfo(
../../../anaconda3/envs/dask-distributed/lib/python3.8/socket.py:918: in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E   socket.gaierror: [Errno -5] No address associated with hostname
=========================== short test summary info ============================
SKIPPED [1] distributed/comm/tests/test_ucx.py:7: could not import 'ucp': No module named 'ucp'
SKIPPED [1] distributed/comm/tests/test_ucx_config.py:19: could not import 'ucp': No module named 'ucp'
SKIPPED [1] distributed/diagnostics/tests/test_nvml.py:7: could not import 'pynvml': No module named 'pynvml'
SKIPPED [1] distributed/protocol/tests/test_arrow.py:3: could not import 'pyarrow': No module named 'pyarrow'
SKIPPED [1] distributed/protocol/tests/test_cupy.py:9: could not import 'cupy': No module named 'cupy'
SKIPPED [1] distributed/protocol/tests/test_keras.py:3: could not import 'keras': No module named 'tensorflow'
SKIPPED [1] distributed/protocol/tests/test_numba.py:9: could not import 'numba.cuda': No module named 'numba'
SKIPPED [1] distributed/protocol/tests/test_rmm.py:8: could not import 'numba.cuda': No module named 'numba'
SKIPPED [1] distributed/protocol/tests/test_sparse.py:4: could not import 'sparse': No module named 'sparse'
SKIPPED [1] distributed/protocol/tests/test_torch.py:6: could not import 'torch': No module named 'torch'
ERROR distributed/comm/tests/test_comms.py - socket.gaierror: [Errno -5] No a...
ERROR distributed/comm/tests/test_ws.py - socket.gaierror: [Errno -5] No addr...
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!
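Reading the tracebacks above, the gaierror is a second failure in a fallback path: after the UDP connect raises OSError, `_get_ip` falls back to `socket.getaddrinfo` on the local hostname, which raises gaierror(-5) when that hostname has no IPv6 (AAAA) entry. A hypothetical reproduction of just that fallback lookup (the hostname is whatever the local machine reports, not a value from distributed):

```python
import socket

# Resolve this machine's own hostname to an IPv6 address, mirroring
# the fallback getaddrinfo call shown at distributed/utils.py:142.
hostname = socket.gethostname()
try:
    infos = socket.getaddrinfo(hostname, 80,
                               family=socket.AF_INET6,
                               type=socket.SOCK_DGRAM)
    print("IPv6 addresses:", sorted({info[4][0] for info in infos}))
except socket.gaierror as exc:
    # On hosts whose /etc/hosts maps the hostname to IPv4 only, this
    # is the "[Errno -5] No address associated with hostname" error.
    print("lookup failed:", exc)
```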

Anything else we need to know?:

The test collection errors occur when running pytest distributed/comm/tests

Additionally, several CLI tests fail on my machine: pytest distributed/cli/tests

FAILED distributed/cli/tests/test_dask_scheduler.py::test_defaults - socket.g...
FAILED distributed/cli/tests/test_dask_scheduler.py::test_hostport - socket.g...
FAILED distributed/cli/tests/test_dask_scheduler.py::test_no_dashboard - asse...
FAILED distributed/cli/tests/test_dask_scheduler.py::test_dashboard_whitelist
Details:
=================================== FAILURES ===================================
________________________________ test_defaults _________________________________

host = '2001:4860:4860::8888', port = 80, family = <AddressFamily.AF_INET6: 10>

    @toolz.memoize
    def _get_ip(host, port, family):
        # By using a UDP socket, we don't actually try to connect but
        # simply select the local address through which *host* is reachable.
        sock = socket.socket(family, socket.SOCK_DGRAM)
        try:
>           sock.connect((host, port))
E           OSError: [Errno 101] Network is unreachable

distributed/utils.py:133: OSError

During handling of the above exception, another exception occurred:

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f361c988f70>

    def test_defaults(loop):
        with popen(["dask-scheduler"]):
    
            async def f():
                # Default behaviour is to listen on all addresses
                await assert_can_connect_from_everywhere_4_6(8786, timeout=5.0)
    
            with Client(f"127.0.0.1:{Scheduler.default_port}", loop=loop) as c:
>               c.sync(f)

distributed/cli/tests/test_dask_scheduler.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
distributed/client.py:845: in sync
    return sync(
distributed/utils.py:325: in sync
    raise exc.with_traceback(tb)
distributed/utils.py:308: in f
    result[0] = yield future
../../../anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/gen.py:762: in run
    value = future.result()
distributed/cli/tests/test_dask_scheduler.py:33: in f
    await assert_can_connect_from_everywhere_4_6(8786, timeout=5.0)
distributed/utils_test.py:1193: in assert_can_connect_from_everywhere_4_6
    assert_can_connect("%s://[%s]:%d" % (protocol, get_ipv6(), port), **kwargs),
distributed/utils.py:164: in get_ipv6
    return _get_ip(host, port, family=socket.AF_INET6)
cytoolz/functoolz.pyx:476: in cytoolz.functoolz._memoize.__call__
    ???
distributed/utils.py:142: in _get_ip
    addr_info = socket.getaddrinfo(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

host = 'genevieve-G5-5500', port = 80, family = <AddressFamily.AF_INET6: 10>
type = <SocketKind.SOCK_DGRAM: 2>, proto = 17, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.
    
        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.
    
        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -5] No address associated with hostname

../../../anaconda3/envs/dask-distributed/lib/python3.8/socket.py:918: gaierror
----------------------------- Captured stdout call -----------------------------


Print from stderr
  /
=================

distributed.scheduler - INFO - -----------------------------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
/home/genevieve/GitHub/temp/distributed/distributed/node.py:160: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 33299 instead
  warnings.warn(
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.102:8786
distributed.scheduler - INFO -   dashboard at:                    :33299
distributed.scheduler - INFO - Receive client connection: Client-c457fb8c-fe6f-11eb-b6c8-e19f018f439a
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-c457fb8c-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Remove client Client-c457fb8c-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Close client connection: Client-c457fb8c-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - End scheduler at 'tcp://192.168.1.102:8786'
Traceback (most recent call last):
  File "/home/genevieve/anaconda3/envs/dask-distributed/bin/dask-scheduler", line 33, in <module>
    sys.exit(load_entry_point('distributed', 'console_scripts', 'dask-scheduler')())
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 217, in go
    main()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 208, in main
    loop.run_sync(run)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/ioloop.py", line 529, in run_sync
    raise TimeoutError("Operation timed out after %s seconds" % timeout)
tornado.util.TimeoutError: Operation timed out after None seconds



Print from stdout
=================


________________________________ test_hostport _________________________________

host = '2001:4860:4860::8888', port = 80, family = <AddressFamily.AF_INET6: 10>

    @toolz.memoize
    def _get_ip(host, port, family):
        # By using a UDP socket, we don't actually try to connect but
        # simply select the local address through which *host* is reachable.
        sock = socket.socket(family, socket.SOCK_DGRAM)
        try:
>           sock.connect((host, port))
E           OSError: [Errno 101] Network is unreachable

distributed/utils.py:133: OSError

During handling of the above exception, another exception occurred:

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f361bae7610>

    def test_hostport(loop):
        with popen(["dask-scheduler", "--no-dashboard", "--host", "127.0.0.1:8978"]):
    
            async def f():
                # The scheduler's main port can't be contacted from the outside
                await assert_can_connect_locally_4(8978, timeout=5.0)
    
            with Client("127.0.0.1:8978", loop=loop) as c:
                assert len(c.nthreads()) == 0
>               c.sync(f)

distributed/cli/tests/test_dask_scheduler.py:51: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
distributed/client.py:845: in sync
    return sync(
distributed/utils.py:325: in sync
    raise exc.with_traceback(tb)
distributed/utils.py:308: in f
    result[0] = yield future
../../../anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/gen.py:762: in run
    value = future.result()
distributed/cli/tests/test_dask_scheduler.py:47: in f
    await assert_can_connect_locally_4(8978, timeout=5.0)
distributed/utils_test.py:1226: in assert_can_connect_locally_4
    assert_cannot_connect("tcp://[%s]:%d" % (get_ipv6(), port), **kwargs),
distributed/utils.py:164: in get_ipv6
    return _get_ip(host, port, family=socket.AF_INET6)
cytoolz/functoolz.pyx:476: in cytoolz.functoolz._memoize.__call__
    ???
distributed/utils.py:142: in _get_ip
    addr_info = socket.getaddrinfo(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

host = 'genevieve-G5-5500', port = 80, family = <AddressFamily.AF_INET6: 10>
type = <SocketKind.SOCK_DGRAM: 2>, proto = 17, flags = 0

    def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
        """Resolve host and port into list of address info entries.
    
        Translate the host/port argument into a sequence of 5-tuples that contain
        all the necessary arguments for creating a socket connected to that service.
        host is a domain name, a string representation of an IPv4/v6 address or
        None. port is a string service name such as 'http', a numeric port number or
        None. By passing None as the value of host and port, you can pass NULL to
        the underlying C API.
    
        The family, type and proto arguments can be optionally specified in order to
        narrow the list of addresses returned. Passing zero as a value for each of
        these arguments selects the full range of results.
        """
        # We override this function since we want to translate the numeric family
        # and socket type values to enum constants.
        addrlist = []
>       for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E       socket.gaierror: [Errno -5] No address associated with hostname

../../../anaconda3/envs/dask-distributed/lib/python3.8/socket.py:918: gaierror
----------------------------- Captured stdout call -----------------------------


Print from stderr
  /
=================

distributed.scheduler - INFO - -----------------------------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
/home/genevieve/GitHub/temp/distributed/distributed/node.py:160: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 45903 instead
  warnings.warn(
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:      tcp://127.0.0.1:8978
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45903
distributed.scheduler - INFO - Receive client connection: Client-c5649c23-fe6f-11eb-b6c8-e19f018f439a
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-c5649c23-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Remove client Client-c5649c23-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Close client connection: Client-c5649c23-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - End scheduler at 'tcp://127.0.0.1:8978'
Traceback (most recent call last):
  File "/home/genevieve/anaconda3/envs/dask-distributed/bin/dask-scheduler", line 33, in <module>
    sys.exit(load_entry_point('distributed', 'console_scripts', 'dask-scheduler')())
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 217, in go
    main()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 208, in main
    loop.run_sync(run)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/ioloop.py", line 529, in run_sync
    raise TimeoutError("Operation timed out after %s seconds" % timeout)
tornado.util.TimeoutError: Operation timed out after None seconds



Print from stdout
=================


______________________________ test_no_dashboard _______________________________

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f361bbdc670>

    def test_no_dashboard(loop):
        with popen(["dask-scheduler", "--no-dashboard"]):
            with Client(f"127.0.0.1:{Scheduler.default_port}", loop=loop):
                response = requests.get("http://127.0.0.1:8787/status/")
>               assert response.status_code == 404
E               assert 200 == 404
E                 +200
E                 -404

distributed/cli/tests/test_dask_scheduler.py:58: AssertionError
----------------------------- Captured stdout call -----------------------------


Print from stderr
  /
=================

distributed.scheduler - INFO - -----------------------------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
/home/genevieve/GitHub/temp/distributed/distributed/node.py:160: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 43953 instead
  warnings.warn(
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:  tcp://192.168.1.102:8786
distributed.scheduler - INFO -   dashboard at:                    :43953
distributed.scheduler - INFO - Receive client connection: Client-c67c7632-fe6f-11eb-b6c8-e19f018f439a
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-c67c7632-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Remove client Client-c67c7632-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - Close client connection: Client-c67c7632-fe6f-11eb-b6c8-e19f018f439a
distributed.scheduler - INFO - End scheduler at 'tcp://192.168.1.102:8786'
Traceback (most recent call last):
  File "/home/genevieve/anaconda3/envs/dask-distributed/bin/dask-scheduler", line 33, in <module>
    sys.exit(load_entry_point('distributed', 'console_scripts', 'dask-scheduler')())
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 217, in go
    main()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/home/genevieve/GitHub/temp/distributed/distributed/cli/dask_scheduler.py", line 208, in main
    loop.run_sync(run)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/ioloop.py", line 529, in run_sync
    raise TimeoutError("Operation timed out after %s seconds" % timeout)
tornado.util.TimeoutError: Operation timed out after None seconds



Print from stdout
=================


___________________________ test_dashboard_whitelist ___________________________

loop = <tornado.platform.asyncio.AsyncIOLoop object at 0x7f361bae7b50>

    @pytest.mark.skipif(not LINUX, reason="Need 127.0.0.2 to mean localhost")
    def test_dashboard_whitelist(loop):
        pytest.importorskip("bokeh")
        with pytest.raises(Exception):
>           requests.get("http://localhost:8787/status/").ok
E           Failed: DID NOT RAISE <class 'Exception'>

distributed/cli/tests/test_dask_scheduler.py:123: Failed

Environment:

  • Dask version: 2021.08.0 (latest)
  • Python version: 3.8
  • Operating System: Ubuntu 20.04
  • Install method (conda, pip, source): conda environment creation, pip editable installation of distributed
@forana

forana commented Aug 19, 2021

Also on Ubuntu 20.04 and Python 3.8.10, and I'm seeing the same test failures.

@ncclementi
Member

cc: @jrbourbeau

@QuLogic
Contributor

QuLogic commented Aug 21, 2021

Try export DISABLE_IPV6=1; see #4514.
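(For anyone finding this later, applied in the shell before running the tests the workaround looks roughly like this; the pytest invocation is the one from the issue description.)

```shell
# Workaround from #4514: skip IPv6 address discovery in distributed's
# test helpers, then re-run the suite from the same shell.
export DISABLE_IPV6=1
echo "DISABLE_IPV6=$DISABLE_IPV6"   # confirm the variable is exported
# py.test distributed --verbose
```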

@GenevieveBuckley
Contributor Author

Thanks @QuLogic, export DISABLE_IPV6=1 does help with the test collection errors.

Still, there are lots of test failures now.

Details:
pytest
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/genevieve/anaconda3/envs/dask-distributed/bin/python
cachedir: .pytest_cache
rootdir: /home/genevieve/GitHub/distributed, configfile: setup.cfg
plugins: asyncio-0.12.0, rerunfailures-10.1, repeat-0.8.0, timeout-1.4.2, anyio-3.3.0
timeout: 300.0s
timeout method: thread
timeout func_only: False
collected 2250 items / 10 skipped / 2240 selected                              

distributed/cli/tests/test_dask_scheduler.py::test_defaults FAILED       [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_hostport FAILED       [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_no_dashboard FAILED   [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_dashboard FAILED      [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_dashboard_non_standard_ports FAILED [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_dashboard_whitelist FAILED [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_interface FAILED      [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_pid_file PASSED       [  0%]
distributed/cli/tests/test_dask_scheduler.py::test_scheduler_port_zero 
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~ Stack of IO loop (140206412252928) ~~~~~~~~~~~~~~~~~~~~~~
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/genevieve/GitHub/distributed/distributed/utils.py", line 403, in run_loop
    loop.start()
  File "/home/genevieve/GitHub/distributed/distributed/utils_test.py", line 131, in start
    orig_start()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/tornado/platform/asyncio.py", line 199, in start
    self.asyncio_loop.run_forever()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
    self._run_once()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/asyncio/base_events.py", line 1823, in _run_once
    event_list = self._selector.select(timeout)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/selectors.py", line 468, in select
    fd_event_list = self._selector.poll(timeout, max_ev)

~~~~~~~~~~~~~~~~ Stack of TCP-Executor-9675-1 (140205666182912) ~~~~~~~~~~~~~~~~
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/genevieve/GitHub/distributed/distributed/threadpoolexecutor.py", line 51, in _worker
    task = work_queue.get(timeout=1)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)

~~~~~~~~~~~~~~~~ Stack of TCP-Executor-9675-0 (140206554863360) ~~~~~~~~~~~~~~~~
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/genevieve/GitHub/distributed/distributed/threadpoolexecutor.py", line 51, in _worker
    task = work_queue.get(timeout=1)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)

~~~~~~~~~~~~~~~~~~ Stack of Dask-Offload_0 (140206127032064) ~~~~~~~~~~~~~~~~~~~
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/concurrent/futures/thread.py", line 78, in _worker
    work_item = work_queue.get(block=True)

~~~~~~~~~~~~~~~~~~~~ Stack of MainThread (140207498544960) ~~~~~~~~~~~~~~~~~~~~~
  File "/home/genevieve/anaconda3/envs/dask-distributed/bin/pytest", line 11, in <module>
    sys.exit(console_main())
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
    code = main()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/main.py", line 269, in wrap_session
    session.exitstatus = doit(config, session) or 0
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/main.py", line 323, in _main
    config.hook.pytest_runtestloop(session=session)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/main.py", line 348, in pytest_runtestloop
    item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 109, in pytest_runtest_protocol
    runtestprotocol(item, nextitem=nextitem)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 126, in runtestprotocol
    reports.append(call_and_report(item, "call", log))
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 215, in call_and_report
    call = call_runtest_hook(item, when, **kwds)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 254, in call_runtest_hook
    return CallInfo.from_call(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 311, in from_call
    result: Optional[TResult] = func()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 255, in <lambda>
    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/runner.py", line 162, in pytest_runtest_call
    item.runtest()
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/python.py", line 1641, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/site-packages/_pytest/python.py", line 183, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/home/genevieve/GitHub/distributed/distributed/cli/tests/test_dask_scheduler.py", line 210, in test_scheduler_port_zero
    with Client(scheduler_file=fn, loop=loop) as c:
  File "/home/genevieve/GitHub/distributed/distributed/client.py", line 764, in __init__
    self.start(timeout=timeout)
  File "/home/genevieve/GitHub/distributed/distributed/client.py", line 1010, in start
    sync(self.loop, self._start, **kwargs)
  File "/home/genevieve/GitHub/distributed/distributed/utils.py", line 323, in sync
    e.wait(10)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 558, in wait
    signaled = self._cond.wait(timeout)
  File "/home/genevieve/anaconda3/envs/dask-distributed/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)

+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
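For context on where the hang occurs: the traceback bottoms out in `threading.Event.wait`, reached from distributed's `sync()` helper as `e.wait(10)`. A minimal sketch of that standard-library pattern (not distributed's actual code) shows the semantics — `wait()` blocks until the event is set or the timeout elapses, and returns `False` on timeout, which is what happens here when the client never manages to connect:

```python
import threading

# Minimal illustration of the blocking-wait pattern at the bottom of
# the traceback. The event stands in for "the client finished starting".
e = threading.Event()

# Nothing ever calls e.set(), so this blocks for ~0.1 s and then
# returns False, signalling that the wait timed out.
connected = e.wait(0.1)
print(connected)
```

Under pytest-timeout's 300 s thread-based timeout (visible in the session header above), a wait like this that never completes is what produces the `+++ Timeout +++` dump rather than an ordinary test failure.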

@GenevieveBuckley GenevieveBuckley added the documentation Improve or add to documentation label Oct 18, 2021
@GenevieveBuckley (Contributor, Author) commented:

Perhaps we should add a note to the distributed developer installation docs?
