We've implemented a custom scheduler for Dask and configured it to use the eth0 interface. This works fine on machines whose eth0 has an IPv4 address (or both IPv4 and IPv6 addresses), but it fails on machines that have only an IPv6 address. The failure occurs in distributed/utils.get_ip_interface() with the message "interface 'eth0' doesn't have an IPv4 address":
```
Traceback (most recent call last):
  File "/infrastructure/nambar/pyapitest/test_dask.py", line 7, in <module>
    cluster = NetbatchCluster(queue='iil_critical', qslot="/admin/nambar", log_directory="/tmp",
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/nbdask/nbdask.py", line 144, in __init__
    super().__init__(name=name, config_name="netbatch", log_directory=log_directory,
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/dask_jobqueue/core.py", line 663, in __init__
    super().__init__(
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/distributed/deploy/spec.py", line 284, in __init__
    self.sync(self._start)
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/distributed/utils.py", line 364, in sync
    return sync(
           ^^^^^
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/distributed/utils.py", line 440, in sync
    raise error
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/distributed/utils.py", line 414, in f
    result = yield future
             ^^^^^^^^^^^^
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/tornado/gen.py", line 766, in run
    value = future.result()
            ^^^^^^^^^^^^^^^
  File "/infrastructure/nambar/pyapitest/.venv/lib/python3.12/site-packages/distributed/deploy/spec.py", line 335, in _start
    raise RuntimeError(f"Cluster failed to start: {e}") from e
RuntimeError: Cluster failed to start: interface 'eth0' doesn't have an IPv4 address
```
Example of a machine with both IPv4 and IPv6:

Example of a machine with IPv6 only:
```
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 30:3e:a7:00:67:62 brd ff:ff:ff:ff:ff:ff
    altname enp8s0f0
    altname ens2f0
```
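For reference, here is a minimal sketch of the kind of check that fails on such a machine, assuming psutil is available; this approximates the behaviour we're seeing, it is not the actual distributed source:

```python
import socket

import psutil


def interface_address_families(ifname: str) -> set:
    """Return the address families (AF_INET, AF_INET6, ...) present on an interface."""
    addrs = psutil.net_if_addrs()
    if ifname not in addrs:
        raise ValueError(f"{ifname!r} is not a valid network interface")
    return {addr.family for addr in addrs[ifname]}


families = interface_address_families("eth0")
if socket.AF_INET not in families:
    # On an IPv6-only machine this branch is taken, mirroring the
    # "interface 'eth0' doesn't have an IPv4 address" failure above.
    print("eth0 has no IPv4 address; families present:", families)
```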
The distributed package code does indeed seem to check only for an IPv4 address, but the Dask documentation states that both IPv4 and IPv6 are supported. Is IPv6 expected to be supported?
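In the meantime, a possible workaround sketch (an assumption on our side, not an officially documented path) is to resolve the interface's IPv6 address ourselves and pass it as the scheduler host instead of an interface name; the scheduler_options usage below is hypothetical and depends on how our NetbatchCluster forwards options to dask_jobqueue/distributed:

```python
import socket

import psutil


def first_ipv6_address(ifname: str) -> str:
    """Return the first IPv6 address configured on the given interface."""
    for addr in psutil.net_if_addrs().get(ifname, []):
        if addr.family == socket.AF_INET6:
            # Strip a possible zone index such as "%eth0" from link-local addresses.
            return addr.address.split("%")[0]
    raise ValueError(f"interface {ifname!r} doesn't have an IPv6 address")


host = first_ipv6_address("eth0")
# Hypothetical usage with our custom cluster class:
# cluster = NetbatchCluster(..., scheduler_options={"host": host})
```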
- Dask version: 2024.10.0
- dask-jobqueue version: 0.9.0
- Python version: 3.12.3
- Operating System: SUSE Linux Enterprise Server 15 SP4
- Install method (conda, pip, source): pip