always create network with ipv6 #1526
Conversation
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: BenTheElder The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
FYI @amwat lol 😂
/hold
return exec.Command("docker", "network", "create", "-d=bridge", "--ipv6", "--subnet=fc00:db8:2::/64", name).Run()
}
return exec.Command("docker", "network", "create", "-d=bridge", name).Run()
we can't use the same range in all the networks; we should add some bytes from hashing the name or something
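The suggestion above could be sketched roughly like this (a hypothetical illustration, not kind's actual code; `generateULASubnet` and the exact byte layout are assumptions): derive a unique ULA /64 prefix from a hash of the network name, so two differently named networks never collide on the hardcoded fc00:db8:2::/64.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// generateULASubnet derives an IPv6 ULA (fd00::/8) /64 prefix from a
// SHA-256 of the network name: the fd prefix plus five hashed bytes form
// the 40-bit global ID, and two more hashed bytes the 16-bit subnet ID.
// Hypothetical sketch only, not kind's implementation.
func generateULASubnet(name string) string {
	sum := sha256.Sum256([]byte(name))
	return fmt.Sprintf("fd%02x:%02x%02x:%02x%02x:%02x%02x::/64",
		sum[0], sum[1], sum[2], sum[3], sum[4], sum[5], sum[6])
}

func main() {
	// Different names yield different, deterministic /64 prefixes.
	fmt.Println(generateULASubnet("kind"))
	fmt.Println(generateULASubnet("other"))
}
```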
That is true, however this is not new to this PR, and currently there is a single caller with a fixed value.
There's an existing TODO here.
I'm limiting this PR to just resolving the aspect of "ipv4 clusters before ipv6 clusters => bad state"
oops, I forgot that was hardcoded now XD
what happens if the host does not have IPv6? will the command fail?
bad wording on my part, not that IPv6, the host's IPv6:
net.ipv6.conf.all.disable_ipv6 = 1
it still works, but I'm not sure that setting is sufficient to test e.g. lack of kernel modules
$ sudo sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1
$ docker exec kind-control-plane ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: veth94b73da2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 82:b9:38:07:5d:5b brd ff:ff:ff:ff:ff:ff link-netns cni-7e2721d4-185b-b646-b43f-41f797ddfa65
inet 10.244.0.1/32 brd 10.244.0.1 scope global veth94b73da2
valid_lft forever preferred_lft forever
3: vetheb4ad147@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 4e:c3:91:c2:4a:c0 brd ff:ff:ff:ff:ff:ff link-netns cni-ad9efa8f-e278-bed2-a72d-f634c116ede3
inet 10.244.0.1/32 brd 10.244.0.1 scope global vetheb4ad147
valid_lft forever preferred_lft forever
4: veth39517578@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:c6:7a:67:8c:b0 brd ff:ff:ff:ff:ff:ff link-netns cni-fbf3ca43-0688-1ff0-09ba-c90621b6407e
inet 10.244.0.1/32 brd 10.244.0.1 scope global veth39517578
valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fc00:db8:2::2/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:2/64 scope link
valid_lft forever preferred_lft forever
that's fair enough, I think that we can move forward with this :)
i'm definitely nervous about that, maybe I can fire up a COS GKE node and see if it still works
sanity checked that this PR works fine on docker desktop (macOS) and ubuntu 20.04 / docker-ce, neither with ipv6 """enabled""".
/hold cancel
we'll want to update the docs after the release, you don't need to muck with daemon config to do ipv6 anymore 🎉 we can also consider removing that hack from KRTE
/override pull-kind-conformance-parallel-1-12
@BenTheElder: Overrode contexts on behalf of BenTheElder: pull-kind-conformance-parallel-1-12, pull-kind-conformance-parallel-1-13, pull-kind-conformance-parallel-1-14 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-kind-e2e-kubernetes
/retest
hmm, are we sure?
i'm having difficulty proving it, but AFAICT docker ipv6 daemon options are just for the default bridge.
yeah, I think they've moved the new networking to libnetwork and those are legacy options, but that's just my impression
I brought up a multi-node ipv6 cluster on a machine with default docker (no config file).
it worked on a COS node (1.14.10-gke.27), though it appears the ip6tables modules are loaded on my clean cluster 🤷
/hold cancel
/retest
/lgtm
Hey there, this just broke my kind deployment since I have ipv6 disabled.
I tested this with disable_ipv6=1 (see the discussion above), can you share more details about your environment?
As an immediate workaround you can:
kind will only create the network if it does not exist.
It's because it's disabled in grub ... I'm on mobile now, but I think we should probe for ipv6, or try to create the network with ipv6 and fall back to ipv4.
i'll send a patch and we can cut an 0.8.1 ...
^ this. ipv6 is disabled at the kernel level, not via sysctl. I liked the previous way where you could configure your ipFamily. Btw, thanks for the almost immediate reply and for the amazing project :)
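One way to probe for the kernel-level case, sketched hypothetically (`hostHasIPv6` is an assumed helper, not kind's API): when IPv6 is disabled on the kernel command line (ipv6.disable=1), the kernel does not register /proc/sys/net/ipv6 at all, whereas the disable_ipv6 sysctl leaves that directory in place, so checking for it distinguishes the two situations discussed above.

```go
package main

import (
	"fmt"
	"os"
)

// hostHasIPv6 reports whether the running kernel has IPv6 support at all,
// by checking for the /proc/sys/net/ipv6 directory. This is absent when
// IPv6 is disabled at boot (ipv6.disable=1), but still present when only
// the disable_ipv6 sysctl is set. Hypothetical sketch.
func hostHasIPv6() bool {
	_, err := os.Stat("/proc/sys/net/ipv6")
	return err == nil
}

func main() {
	fmt.Println("host has ipv6:", hostHasIPv6())
}
```

Note that, as mentioned below, dockerd may not run on the same machine as the client, so a local probe like this is only a partial answer.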
Created #1544 to keep track.
🤦 thanks, I think we can come up with something equivalent. the tricky part is that the dockerd may not be running on the same machine as kind.
🙃 thanks
/shrug
turns out "enabling ipv6" is only for the default bridge ...?
so we can just always create the kind network with ipv6 enabled.