
Service reorg ideas #2585

Closed
bgrant0607 opened this issue Nov 24, 2014 · 81 comments
Labels: area/api, area/usability, priority/awaiting-more-evidence, sig/api-machinery, sig/network

Comments

@bgrant0607 (Member)

Forking from #2358. Copying verbatim. Will follow with a more concrete proposal.
/cc @smarterclayton @thockin @jbeda

v1beta3 service spec (which is very similar to v1beta1/2):

type ServiceSpec struct {
    Port int `json:"port" yaml:"port"`
    Protocol Protocol `json:"protocol,omitempty" yaml:"protocol,omitempty"`
    Selector map[string]string `json:"selector,omitempty" yaml:"selector,omitempty"`
    PortalIP string `json:"portalIP,omitempty" yaml:"portalIP,omitempty"`
    CreateExternalLoadBalancer bool `json:"createExternalLoadBalancer,omitempty" yaml:"createExternalLoadBalancer,omitempty"`
    PublicIPs []string `json:"publicIPs,omitempty" yaml:"publicIPs,omitempty"`
    ContainerPort util.IntOrString `json:"containerPort,omitempty" yaml:"containerPort,omitempty"`
}

Naming (DNS and link environment variables) isn't currently configurable or optional, but it should be, for things like non-Kubernetes services or headless services.

As discussed in #2319, exposed addresses (PortalIP, PublicIPs) should be unified into a single list from which the system can allocate. We may want to be able to allocate multiple IPs, for DNS load balancing. For headless services, it should also be possible to disable IP allocation. I need to think more about how/whether nominal services (#260) would fit in.

DNS name to IP(s)

As in #2358, these IPs may even map to non-Kubernetes resources.

IPs to endpoints

These IPs may be intra-cluster, accessible via tunnels/proxies, externally visible but ephemeral (dynamically allocated for lifetime of the service), or fully externally visible and stable. #2209 proposed "every node" services for an authentication daemon, and we've discussed using magic nonroutable IP addresses like 169.254.169.253 (DNS) and 169.254.169.254 (metadata) for system services.

We should support ipv6 as well as ipv4.

Pool that the IPs come from

External IP resources may or may not map to specific nodes, would have firewall port ranges, and may have other auth policies attached (e.g., who is allowed to use them). Addresses that map to specific nodes will require reserving the specific requested ports on those nodes.
#2738 proposed multiple IP address "portals", where the addresses could be drawn from pools of addresses that were either internally or externally visible. Portal types could be pluggable, like volumes are intended to be.

IP addresses should be entirely optional in order to support headless services #1607.

As per #1802, we should support multiple ports -- a list of (Port, ContainerPort, Protocol) tuples. And, it should be possible to specify no ports and just deal with addresses. Note that it's useful to specify higher-level protocols, such as HTTP/HTTPS, for proxies, UIs, etc.

Ports and protocols
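For illustration only (not part of the original spec above), a multi-port list along the lines proposed here might look roughly like this; the field names are a sketch, not a real API:

# hypothetical sketch of a (port, containerPort, protocol) tuple list -- field names illustrative
spec:
  ports:
    - port: 80
      containerPort: 8080
      protocol: TCP
    - port: 443
      containerPort: 8443
      protocol: TCP   # could also be a higher-level protocol such as HTTPS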

Target IPs should be specifiable by selector, by an inline list of IPs and/or names, or from an external source -- either by POSTing/PUTting them to the endpoints API or by telling the system to watch an external endpoints API (which I'd like to standardize beyond just K8s).

Endpoints defined by selector or other

I expect in the future that we may want to differentiate the single-target "routing" case (e.g., singleton services, master-elected services, nominal services) from the load-balancing case, for at least 2 reasons:

  1. Users will want to plug in their own load balancers with more sophisticated/custom policies, either running as pods (e.g., HAProxy, nginx) or as services (either from a cloud provider or otherwise outside k8s)
  2. Rather than using a combination of SDN, proxy, and iptables for point-to-point routing, some providers may want pure SDN-based solutions, such as using OVS.

For the case where someone plugs in their own load balancer, I think the current approach would require two services: one to target the load balancer and another to generate the endpoint list of its targets.

FWIW, in GCE, this is factored into forwarding rules and target pools/instances, with load balancing configuration associated with the target pool:
https://cloud.google.com/compute/docs/load-balancing/network/forwarding-rules

With all of the above, is just specifying an external load balancer enough for the case of cloud-provided L3 forwarding and/or balancing? Maybe.

L3 load balancing policy

L7 is #561. L7 balancers (HAProxy, nginx) should be able to consume endpoints and route directly to pod IPs.

For reference, OpenShift router: openshift/origin#514

As per #620, we need readiness checks, for lots of reasons (rolling updates, auto-scaling, disruptive minion management, ...) in addition to load balancing.

Sticky sessions were requested in #2867 and were added by #2875.

We should resolve #983 (port defaulting) in this, also.

@goltermann goltermann added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 26, 2014
@bgrant0607 bgrant0607 added sig/network Categorizes an issue or PR as relevant to SIG Network. area/api Indicates an issue on api area. labels Dec 4, 2014
@bgrant0607 bgrant0607 assigned bgrant0607 and unassigned bgrant0607 Dec 4, 2014
@jbeda (Contributor) commented Dec 9, 2014

Copy/pasting comments from #2738.

@jbeda:

Not sure what issue this belongs to, but I'm thinking that we restructure the Service object. I talked with @brendandburns and I think he is on board.

Here are some ideas:

  • we should split out "desired state" from "current state". (This is done in v1beta3 with spec/status)
  • Reuse the PodPorts that are already on the pod
  • Have a set of portals that expose the service both inside the cluster and out. Each portal can choose the ports to expose -- either by name or protocol and port number.
  • Have different types of portals similar to volumes
  • Reflect data about the IPs/ports picked in the status
kind: service
spec:
  selector:
    tier: frontend
  portals:
    - internalPortal: {}  # This will expose every port internally on a "portal IP"
    - name: admin # Name is optional but probably useful
      externalPortal: 
        pool: corp  # Indicate that the IP/ports for this should be pulled from a pool
    - name: www
      externalPortal:
        IP: 1.2.3.4 # We are naming the IP specifically, no pool needed. 
        ports:
          - portal: 80
            pod: 'http' # We are using the named port in the pod spec
          - portal: 443
            pod: 'https'
    - name: gce_www
      gceLoadBalancer:
        ports: [ 'http', 'https' ] # Since GCE L3 doesn't do port mapping, don't allow that to be specified

Finally, when the user does a GET on the resource, they'll see:

kind: service
spec:
  # [snip]
status:
  portals:
    - IP: 10.0.0.1
      internalPortal: {}
      ports:
        - name: 'http'
          portalPort: 80
          podPort: 80
          protocol: TCP
        - name: 'https'
          portalPort: 443
          podPort: 443
          protocol: TCP
        - name: 'admin_http'
          portalPort: 8080
          podPort: 8080
          protocol: TCP
    - name: admin
      IP: 192.168.1.100  # RFC 1918 range, but perhaps HA inside a corp
      externalPortal: {}
      ports:
        - name: 'http'
          portalPort: 6190  # Note the port mapping here
          podPort: 80
          protocol: TCP
        - name: 'https'
          portalPort: 6191
          podPort: 443
          protocol: TCP
        - name: 'admin_http'
          portalPort: 6192
          podPort: 8080
          protocol: TCP
    - name: www
      externalPortal: {}
      IP: 1.2.3.4
      ports:
        - name: 'http'
          portalPort: 80
          podPort: 80
          protocol: TCP
        - name: 'https'
          portalPort: 443
          podPort: 443
          protocol: TCP
    - name: gce_www
      gceLoadBalancer:
        resourceLink: https://googleapis.com/compute/v1/...
      IP: 2.3.4.5
      ports:
        - name: 'http'
          portalPort: 80
          podPort: 80
          protocol: TCP
        - name: 'https'
          portalPort: 443
          podPort: 443
          protocol: TCP

Does this parse for folks? I'm happy to move this off to a new issue if there is a better place to discuss this.

@lavalamp:

I like explicitly listing the portals, that's a big improvement.

I think there's some more work to be done defining the contents of portals, though. Having portals inside ports inside portals is confusing.

@jbeda:

I don't think we have portals inside of ports. Basically it is like this:

Each service exports a set of ports. These ports are defined based on the pods that the service points to. (Open question on what to do if the pods aren't uniform. Perhaps this won't work. But we give ports on pods names, so we should try to use them if we can.)

Each service has a list of portals. In status, a portal has an IP.

Each portal is of a type. Similar to volumes, this is done with a "subtype". This can include extra info like the name of the LB that was created on your behalf for GCE.

Each portal has a port map -- what ports are exposed on that portal and how they map to the pods.

What can we remove to help simplify this?

@lavalamp:

Specifically, I was referring to this sequence in your example:

  portals:
    - name: www
      externalPortal:
        ports:
          - portal: 80
            pod: 'http' # We are using the named port in the pod spec

Is the idea behind the portal/pod pair at the end that one is the port that is exposed and the other is the port it routes to? Maybe this is clearer:

        ports:
          - expose: 80
            routeTo: 'http' # We are using the named port in the pod spec

@jbeda:

Ah -- gotcha. There, the last portal could be called portalPort. This is essentially a mapping from the port that the portal exposes to the port to use on the pod. expose sounds good, but I'm not a fan of route in this context. We can bikeshed on names after we get the structure down.

@stp-ip:

Just to get a sense of the possibilities here.
We have internal services, with an internal IP.
We have exposed services, which use the proxy/portal to map the internal IP to an external IP.
We have internal services consuming external services. Would these be mapped via the portals too?
So instead of internal -> portal -> external, one would use external -> portal -> internal service.
This would make the overall structure more streamlined in my opinion.

Additionally, with the portal and its mapping of IP:Port to IP:Port, the next possibility could be routing.
Not just a mapping of the external IP to a service, but a mapping of example.com/service1 to service1 and example.com/service2 to service2, for example. This would enable the definition of simplistic routing at the edge of the Kubernetes cluster, perhaps even with support from LBs.

@bgrant0607 (Member, Author)

In the last comment, I think @stp-ip was asking for what is essentially OpenShift's Route type:

type Route struct {
    TypeMeta   `json:",inline" yaml:",inline"`
    ObjectMeta `json:"metadata,omitempty" yaml:"metadata,omitempty"`
    // Required: Alias/DNS that points to the service
    // Can be host or host:port
    // host and port are combined to follow the net/url URL struct
    Host string `json:"host" yaml:"host"`
    // Optional: Path that the router watches for, to route traffic to the service
    Path string `json:"path,omitempty" yaml:"path,omitempty"`
    // the name of the service that this route points to
    ServiceName string `json:"serviceName" yaml:"serviceName"`
}

Much like our service proxy watches endpoints, an HTTP reverse proxy, such as HAProxy, could watch routes.
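As a purely illustrative instance of that type (the apiVersion and the object, host, and service names here are placeholders, not a real API):

kind: Route
apiVersion: v1beta3
metadata:
  name: service1-route
host: www.example.com
path: /service1
serviceName: service1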

@bgrant0607 (Member, Author)

First of all, bikeshedding about the name. I think "service" is too overloaded and brings too much baggage. Also, if we decompose this into multiple objects, they'll need more specific names. For at least the "frontend" portion, I propose Forwarder, akin to GCE's forwarding rules.

In terms of decomposing this, I think it divides nicely between the "frontend" parts and the "backend" target/endpoint specification. The backend could be inline, but a separate object would make sense too; in that case we could decouple production of the Endpoints list from consumption of it.

type EndpointsSpec struct {
    Selector map[string]string `json:"selector"`
    PodPorts []string `json:"podPorts,omitempty"` // port names
    // Explicit hosts or protocol://host:port, where hosts can be DNS names and/or IP addresses
    Endpoints []string `json:"endpoints,omitempty"`
}

UPDATE 01/14/2015: The manual Endpoints list above doesn't work well for multiple ports on the same IP address. It probably just needs to be a list of hosts. The PodPorts could be generalized to work for manual targets. They'd have to be protocol+port pairs.

@bgrant0607 (Member, Author)

We could distinguish internal vs. external portals by pool (i.e., have an internal pool).

I think GCELoadBalancer isn't something we want as a first-class API type. Ideally, we'd have a generic CloudBalancer that called into the cloudprovider API underneath, perhaps with plugins for cloud-specific parameters. Otherwise, the whole thing should just be a plugin.

@bgrant0607 (Member, Author)

I could also imagine a host pool, for contacting daemons on the same host, and support for mapping link-local addresses to per-node agents.

@bgrant0607 (Member, Author)

We should support an array of IP addresses rather than just a single address. This would permit DNS load balancing, such as in the case that the addresses don't map to Kubernetes backends.

I'd also like to support a one-to-one option on the portals, to indicate a nominal service (#260). In that case, no load-balancing details, such as SessionAffinity, would be specified. Perhaps those could be alternative MappingOptions.

I assume the portal names would be used to produce distinct DNS names for each portal. We may not support externally visible DNS yet, but I could imagine that we would in the future.

In the pod port specifications, it would be useful to indicate whether http/https were supported, in addition to TCP vs. UDP. Or perhaps we could accept HTTP and HTTPS in addition to just TCP.

We should add the ReadinessProbe spec to pods, also, which the Endpoints Controller could use. Since I've proposed we also target host:port, we might also need readiness probe spec in EndpointsSpec.
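For reference, readiness probes did eventually land in the pod API; a minimal example of the shape they took (the pod, container, image, and path below are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: http
          containerPort: 80
      readinessProbe:
        httpGet:
          path: /healthz
          port: http
        initialDelaySeconds: 5
        periodSeconds: 10

The endpoints controller only includes addresses of pods whose readiness checks pass, which is the behavior this comment anticipates.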

@bgrant0607 (Member, Author)

Another thought on provider-specific behavior: Ideally, the user could enumerate all provider-specific options, and the system would select out the appropriate ones based on which provider the cluster was actually running on. That would facilitate config reuse across multiple providers.

@bgrant0607 (Member, Author)

I thought of one difference between internal and external portals: For internal portals, we may want to add a whitelist of namespaces that could access the service and/or a selector to indicate which pods within the namespace should be permitted access. Note that the latter can't provide real protection until/unless we enforce application of labels by label namespace.

@bgrant0607 (Member, Author)

It would also be useful for a service provider to publish fairly arbitrary metadata to be consumed by clients, such as client connection parameters, shard maps, keys to look up resource usage data, etc. Not exactly labels or annotations, but would be a map of string to string (cf. DNS TXT key-value data). Maybe "serviceParameters" or "clientConfiguration". If we wanted it to be customizable per pod, it would need to be pulled from the pods, or even from the containers themselves, perhaps similar to readiness. Our internal RPC, load balancing, sharding, and other communication libraries/components depend more and more on this type of thing.

@bgrant0607 (Member, Author)

Starting with the easy part: the "backend". I think we have consensus that this is at least a logically separable part of the service. Certainly it's independently useful (#1607). My previous stab at this was here: #2585 (comment)

The only thing I'd add is the key-value payload described here: #2585 (comment)

An alternative name could be TargetController, TargetWatcher, etc. See #3024 for general name ideas. But, if we change it, we should change "Endpoints" also, since the main job of this is to generate the Endpoints lists.

In the future, we could add configurable policies about what to filter from the endpoints list, such as whether to use readiness (#620) or not.

Like in the proposal to split out the pod template from ReplicationController (#170), we could support both an inline version of this in the other part of Service and a reference to a separate object.

Will work on the other part(s) of Service now.

/cc @thockin @jbeda

@bgrant0607 (Member, Author)

An observation: If we went with an independent EndpointsController object, the different portals shown in @jbeda's example #2585 (comment) could be all separate objects referring to the same EndpointsController object.

@bgrant0607 (Member, Author)

First stab at fully granular version (specs only):

//L7: host:port/path to portal mapping [we may not support L7, but here for completeness]
type RouteSpec struct {
    // Required: Alias/DNS that points to the service
    // Can be host or host:port
    // host and port are combined to follow the net/url URL struct
    Host string `json:"host"`
    // Optional: Path that the router watches for, to route traffic for to the service
    Path string `json:"path,omitempty"`
    // the reference to the portal that this route points to
    PortalRef *ObjectReference `json:"portalRef,omitempty"`
}
//DNS to IPs mapping
type DNSSpec struct {
    Hostname string `json:"hostname"`
    // the reference to the portal that the name points to
    PortalRef *ObjectReference `json:"portalRef,omitempty"`
}
// Port mapping (not a standalone object)
type PortalPortSpec struct {
    ExposedPort int `json:"exposedPort"`
    // Port names or protocol:port, where protocol could be UDP, TCP, HTTP, HTTPS
    TargetPort string `json:"targetPort"`
}
//IP and port remapper
type PortalSpec struct {
    // Optional list of port translations
    Ports []PortalPortSpec `json:"ports,omitempty"`
    // Optional port range (port or port-port) to forward, without translation
    PortRange string `json:"portRange,omitempty"`
    // Exposed addresses (AddressAllocation)
    ExposedAddresses *ObjectReference `json:"exposedAddresses,omitempty"`
    // Target addresses (Endpoints)
    TargetAddresses *ObjectReference `json:"targetAddresses,omitempty"`
    // If nil, it's expected that exposed addresses will be mapped 1-to-1 to target addresses
    LoadBalancer *LoadBalancerSpec `json:"loadBalancer,omitempty"`
}
//L3 LB (not a standalone object)
type LoadBalancerSpec struct {
    SessionAffinity AffinityType `json:"sessionAffinity,omitempty"`
//We should support plugins, however we specify that, for cloud-provider-specific arguments
}
//Address allocation
type AddressAllocationSpec struct {
    // Number of addresses to allocate
    Count int `json:"count"`
    // Pool from which to allocate. Kubernetes would provide some pre-defined pools, such as
    // "internal" and "linklocal"
    AddressPoolRef *ObjectReference `json:"addressPoolRef,omitempty"`
}
//Address pool: could be external or internal addresses
type ManualAddressPoolSpec struct {
    Addresses []string `json:"addresses"`
}
//Addresses allocated on demand by calling cloud provider
type CloudAddressPoolSpec struct {
//This should be a plugin, however we specify that, with cloud-provider-specific arguments
}
//"Backend" Endpoints generator
type EndpointsSpec struct {
    // Port names or protocol:port, where protocol could be UDP, TCP, HTTP, HTTPS
    Ports []string `json:"ports,omitempty"`
    // Selector to identify target pods
    Selector map[string]string `json:"selector,omitempty"`
    // Manually specified hosts
    Hosts []string `json:"hosts,omitempty"`
    // Information for enlightened clients; copied into Endpoints
    Info map[string]string `json:"info,omitempty"`
}
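To make the composition concrete, here is a hypothetical set of manifests wiring these sketched types together; the kind names, apiVersion, and object names are illustrative only:

kind: AddressAllocation
apiVersion: v1beta3
metadata:
  name: frontend-addresses
spec:
  count: 2
  addressPoolRef:
    name: internal   # one of the hypothetical pre-defined pools mentioned above
---
kind: Endpoints
apiVersion: v1beta3
metadata:
  name: frontend-targets
spec:
  ports: ["http", "https"]   # named pod ports
  selector:
    tier: frontend
---
kind: Portal
apiVersion: v1beta3
metadata:
  name: frontend
spec:
  ports:
    - exposedPort: 80
      targetPort: "http"
    - exposedPort: 443
      targetPort: "https"
  exposedAddresses:
    name: frontend-addresses
  targetAddresses:
    name: frontend-targets
  loadBalancer:
    sessionAffinity: ClientIP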

Will explore alternatives next.

@bgrant0607 (Member, Author)

Low-hanging consolidation/simplifications:

  • DNSSpec could just go away in favor of populating DNS automatically for Portals.
  • AddressAllocationSpec could get pulled into PortalSpec

This would look like:

type PortalSpec struct {
    // Optional list of port translations
    Ports []PortalPortSpec `json:"ports,omitempty"`
    // Optional port range (port or port-port) to forward, without translation
    PortRange string `json:"portRange,omitempty"`
    // Number of addresses to allocate
    ExposedAddressCount int `json:"exposedAddressCount"`
    // Pool from which to allocate. Kubernetes would provide some pre-defined pools, such as
    // "internal" and "linklocal"
    ExposedAddressPool *ObjectReference `json:"exposedAddressPool,omitempty"`
    // Target addresses (Endpoints)
    TargetAddresses *ObjectReference `json:"targetAddresses,omitempty"`
    // If nil, it's expected that exposed addresses will be mapped 1-to-1 to target addresses
    LoadBalancer *LoadBalancerSpec `json:"loadBalancer,omitempty"`
}

@yugui (Contributor) commented Feb 20, 2015

What is the current status of this issue?

@thockin (Member) commented Feb 20, 2015

In progress.


@thockin (Member) commented Feb 22, 2015

To revisit the previously posted slides about how the final endpoints struct is factored...

As I code it up, I find this factoring somewhat awkward. It's not a HUGE deal (it's just code) but what I realized is that the primary key I care about is the (service, port) tuple, not the (service, ip), which is what we have produced.

In effect I have to pivot all the data into a different struct to use it. A snippet:

        type hostPortPair struct {
                host string
                port int
        }

        // Update endpoints for services.
        for i := range allEndpoints {
                svcEndpoints := &allEndpoints[i]

                // We need to build a map of portname -> all ip:ports for that portname.
                portsToEndpoints := map[string][]hostPortPair{}

                // Explode the Endpoints.Endpoints[*].Ports[*] into the aforementioned map.
                for j := range svcEndpoints.Endpoints {
                        ep := &svcEndpoints.Endpoints[j]
                        for k := range ep.Ports {
                                epp := &ep.Ports[k]
                                portsToEndpoints[epp.Name] = append(portsToEndpoints[epp.Name], hostPortPair{ep.IP, epp.Port})
                                // Ignore the protocol field for now.
                        }
                }

                for portname := range portsToEndpoints {
                        // Finally I can process the (service, port) -> endpoints data.
                        // [snip]
                }
        }

I'm not sure this alone justifies revisiting the proposed structure in #4370, but I wanted to put it out there - as the first consumer of my own work, I am not happy with it :)

@bgrant0607 (Member, Author)

I think there's no escaping that some use cases will want all ports for each IP, other use cases will want all endpoints for a given port name, and other use cases will expect the same port for all IPs.

Given that we expect the same ports for all IPs, would representing the ports and IPs separately be easier?
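For reference, this is roughly the factoring that the v1 Endpoints object later adopted: each subset carries a list of addresses and a parallel list of ports, rather than a port list per address. The manifest below is an editor's illustration, not from the thread:

kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 10.1.2.3
      - ip: 10.1.2.4
    ports:
      - name: http
        port: 80
        protocol: TCP
      - name: https
        port: 443
        protocol: TCP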

@thockin (Member) commented Feb 23, 2015

I don't think this is horrible, it's just a bit tedious. It's only a few LoC, but "annoying". What is the case where someone cares about all the different ports for a given Pod IP?


@bgrant0607 (Member, Author)

Was discussed in person. Use cases:

  • Peers of distributed applications, which keep track of peer IPs and use static ports
  • Caching systems watching all endpoints, which otherwise need to synchronize updates to N lists
  • Monitoring systems, which otherwise need to figure out which endpoints lists to monitor and which ones not to monitor
  • Clients that need to get meta-info (API reflection, authentication) from a secondary port on the same IP

On a different topic, in #4440 I proposed using Endpoints for nodes and pods (not services) to replace ResourceLocation, which /proxy, /redirect, /bastion, etc. could be built on top of. This would suggest that we should change the service endpoints paths, to allow more flavors of endpoints.


@thockin (Member) commented Apr 1, 2015

Multi-port support is in.

@bgrant0607 bgrant0607 removed this from the v1.0 milestone Apr 1, 2015
@bgrant0607 bgrant0607 added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Apr 1, 2015
@bgrant0607 bgrant0607 changed the title Service reorg for v1beta3 Service reorg ideas Apr 1, 2015
@bgrant0607 (Member, Author)

Removed from 1.0 and reduced priority. Will summarize later.

@countspongebob

@thockin and @bgrant0607 ...

Trying to figure out the current state of "multiple ports", and I'm confused about what was removed from 1.0. Can you clarify?

@bgrant0607 (Member, Author)

Multi-port services were completed: #6182, #5939. That was the only feature discussed in this issue required for 1.0, and none of the other ideas were implemented.
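For reference, the multi-port form that shipped in the v1 Service API looks like this (the service name, selector, and ports below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP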

@bgrant0607 (Member, Author)

This issue is largely obsolete. Service/LB changes continue to be discussed in other issues, such as #561.

@dstroot commented Jul 15, 2016

Access external database from inside K8S

I have workers inside K8S that want to talk to an external database. I have created an external mapping to SQL server like so:

apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  ports:
  - port: 1433
    targetPort: 1433
    protocol: TCP
---
# Because this service has no selector, the corresponding Endpoints
# object will not be created. You can manually map the service to
# your own specific endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: database
subsets:
  - addresses:
      - ip: "23.99.34.75"
    ports:
      - port: 1433

DNS seems to work and "database" resolves to a 10.x address, which I think points to the external address, but pods can't seem to successfully access/log in.

Thoughts/suggestions?

@smarterclayton (Contributor) commented Jul 16, 2016 via email

@dstroot commented Jul 16, 2016

@smarterclayton Yep! Thanks for sharing.

That's why I posted here - I also saw your other thread that was circling around this concept.
