
Write PodProxyWithPath & ServiceProxyWithPath test - + 12 endpoint coverage #95503

Merged
merged 4 commits into from
Jan 20, 2021

Conversation

riaankleinhans
Contributor

@riaankleinhans riaankleinhans commented Oct 12, 2020

What type of PR is this?
/kind cleanup

What this PR does / why we need it:
This PR adds a test covering the following untested endpoints:
Pod:
connectCoreV1PutNamespacedPodProxyWithPath
connectCoreV1PostNamespacedPodProxyWithPath
connectCoreV1PatchNamespacedPodProxyWithPath
connectCoreV1OptionsNamespacedPodProxyWithPath
connectCoreV1HeadNamespacedPodProxyWithPath
connectCoreV1DeleteNamespacedPodProxyWithPath
Service:
connectCoreV1PutNamespacedServiceProxyWithPath
connectCoreV1PostNamespacedServiceProxyWithPath
connectCoreV1PatchNamespacedServiceProxyWithPath
connectCoreV1OptionsNamespacedServiceProxyWithPath
connectCoreV1HeadNamespacedServiceProxyWithPath
connectCoreV1DeleteNamespacedServiceProxyWithPath

Which issue(s) this PR fixes:
Fixes #95500

Testgrid Link:
Testgrid

Special notes for your reviewer:
Adds test coverage for 12 endpoints (good for conformance)

Does this PR introduce a user-facing change?:

NONE

Release note:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

NONE

/sig testing
/sig architecture
/area conformance

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. sig/testing Categorizes an issue or PR as relevant to SIG Testing. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. area/conformance Issues or PRs related to kubernetes conformance tests cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 12, 2020
@k8s-ci-robot
Contributor

@Riaankl: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Oct 12, 2020
@k8s-ci-robot k8s-ci-robot added the sig/network Categorizes an issue or PR as relevant to SIG Network. label Oct 12, 2020
@heyste
Member

heyste commented Oct 13, 2020

  • pull-kubernetes-conformance-kind-ipv6-parallel #1315785848280584192
    Kubernetes e2e suite: [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]

/test pull-kubernetes-conformance-kind-ipv6-parallel

@heyste
Member

heyste commented Oct 13, 2020

/test pull-kubernetes-e2e-kind-ipv6

@heyste
Member

heyste commented Oct 13, 2020

pull-kubernetes-e2e-kind-ipv6 #1315806857507377152
Kubernetes e2e suite: [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/test pull-kubernetes-e2e-kind-ipv6

@riaankleinhans
Contributor Author

/test

@k8s-ci-robot
Contributor

@Riaankl: The /test command needs one or more targets.
The following commands are available to trigger jobs:

  • /test pull-kubernetes-bazel-build
  • /test pull-kubernetes-bazel-test
  • /test pull-kubernetes-conformance-image-test
  • /test pull-kubernetes-conformance-kind-ipv6-parallel
  • /test pull-kubernetes-dependencies
  • /test pull-kubernetes-dependencies-canary
  • /test pull-kubernetes-e2e-azure-dualstack
  • /test pull-kubernetes-e2e-iptables-azure-dualstack
  • /test pull-kubernetes-e2e-aws-eks-1-13-correctness
  • /test pull-kubernetes-files-remake
  • /test pull-kubernetes-e2e-gce
  • /test pull-kubernetes-e2e-gce-no-stage
  • /test pull-kubernetes-e2e-gce-kubetest2
  • /test pull-kubernetes-e2e-gce-canary
  • /test pull-kubernetes-e2e-gce-ubuntu
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd
  • /test pull-kubernetes-e2e-gce-ubuntu-containerd-canary
  • /test pull-kubernetes-e2e-gce-rbe
  • /test pull-kubernetes-e2e-gce-alpha-features
  • /test pull-kubernetes-e2e-gce-device-plugin-gpu
  • /test pull-kubernetes-integration
  • /test pull-kubernetes-cross
  • /test pull-kubernetes-e2e-kind
  • /test pull-kubernetes-e2e-kind-canary
  • /test pull-kubernetes-e2e-kind-ipv6
  • /test pull-kubernetes-e2e-kind-ipv6-canary
  • /test pull-kubernetes-conformance-kind-ga-only
  • /test pull-kubernetes-conformance-kind-ga-only-parallel
  • /test pull-kubernetes-e2e-kops-aws
  • /test pull-kubernetes-bazel-build-canary
  • /test pull-kubernetes-bazel-test-canary
  • /test pull-kubernetes-bazel-test-integration-canary
  • /test pull-kubernetes-local-e2e
  • /test pull-publishing-bot-validate
  • /test pull-kubernetes-e2e-gce-network-proxy-http-connect
  • /test pull-kubernetes-e2e-gce-network-proxy-grpc
  • /test pull-kubernetes-e2e-gci-gce-autoscaling
  • /test pull-kubernetes-e2e-aks-engine-azure
  • /test pull-kubernetes-e2e-azure-disk
  • /test pull-kubernetes-e2e-azure-disk-vmss
  • /test pull-kubernetes-e2e-azure-file
  • /test pull-kubernetes-e2e-kind-dual-canary
  • /test pull-kubernetes-e2e-kind-ipvs-dual-canary
  • /test pull-kubernetes-e2e-gci-gce-ipvs
  • /test pull-kubernetes-node-e2e
  • /test pull-kubernetes-e2e-containerd-gce
  • /test pull-kubernetes-node-e2e-containerd
  • /test pull-kubernetes-node-e2e-alpha
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager
  • /test pull-kubernetes-node-kubelet-serial-topology-manager
  • /test pull-kubernetes-node-kubelet-serial-hugepages
  • /test pull-kubernetes-node-crio-cgrpv2-e2e
  • /test pull-kubernetes-node-crio-e2e
  • /test pull-kubernetes-e2e-gce-100-performance
  • /test pull-kubernetes-e2e-gce-big-performance
  • /test pull-kubernetes-e2e-gce-correctness
  • /test pull-kubernetes-e2e-gce-large-performance
  • /test pull-kubernetes-kubemark-e2e-gce-big
  • /test pull-kubernetes-kubemark-e2e-gce-scale
  • /test pull-kubernetes-e2e-gce-storage-slow
  • /test pull-kubernetes-e2e-gce-storage-snapshot
  • /test pull-kubernetes-e2e-gce-storage-slow-rbe
  • /test pull-kubernetes-e2e-gce-csi-serial
  • /test pull-kubernetes-e2e-gce-iscsi
  • /test pull-kubernetes-e2e-gce-iscsi-serial
  • /test pull-kubernetes-e2e-gce-storage-disruptive
  • /test pull-kubernetes-e2e-aks-engine-azure-windows
  • /test pull-kubernetes-e2e-azure-disk-windows
  • /test pull-kubernetes-e2e-azure-file-windows
  • /test pull-kubernetes-typecheck
  • /test pull-kubernetes-verify
  • /test pull-kubernetes-e2e-windows-gce

Use /test all to run the following jobs:

  • pull-kubernetes-bazel-build
  • pull-kubernetes-bazel-test
  • pull-kubernetes-conformance-kind-ipv6-parallel
  • pull-kubernetes-dependencies
  • pull-kubernetes-e2e-gce-ubuntu-containerd
  • pull-kubernetes-integration
  • pull-kubernetes-e2e-kind
  • pull-kubernetes-e2e-kind-ipv6
  • pull-kubernetes-conformance-kind-ga-only-parallel
  • pull-kubernetes-node-e2e
  • pull-kubernetes-e2e-gce-100-performance
  • pull-kubernetes-typecheck
  • pull-kubernetes-verify

In response to this:

/test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@riaankleinhans
Contributor Author

/test all

@riaankleinhans
Contributor Author

/assign @spiffxp @johnbelamaric
We have run the test jobs a few times and all seem OK.
Can we please get an /lgtm & /approve?
https://prow.k8s.io/pr-history/?org=kubernetes&repo=kubernetes&pr=95503

@riaankleinhans
Contributor Author

/test all

@riaankleinhans
Contributor Author

/test pull-kubernetes-e2e-kind-ipv6

@heyste
Member

heyste commented Oct 14, 2020

/sig-network

Can we get an lgtm for this PR? TIA

@aojea
Member

aojea commented Oct 14, 2020

@cmluciano you were working on this area recently, do you have some time to review it?

Member

@oomichi oomichi left a comment

/cc @oomichi


// All methods for Pod ProxyWithPath return 200
httpVerbs := []string{"DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"}
for _, httpVerb := range httpVerbs {
Member

This test just touches the API surfaces and checks that the status codes are 200.
Is it fine to do that without checking actual API behaviors?
I mean, a POST API could create something, and the corresponding test would normally need to check that something was actually created. The same kind of checks should be implemented for the other APIs as well, to verify API behaviors.

Member

The goal of the test is to confirm that the proxy passes valid requests through and returns valid responses back for the list of endpoints below. When planning this test, the feedback was to run porter from the standard agnhost test image.

                     endpoint                      |                          path                           |       kind
---------------------------------------------------|---------------------------------------------------------|-----------------
 connectCoreV1PutNamespacedPodProxyWithPath        | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions
 connectCoreV1PostNamespacedPodProxyWithPath       | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions
 connectCoreV1PatchNamespacedPodProxyWithPath      | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions
 connectCoreV1OptionsNamespacedPodProxyWithPath    | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions
 connectCoreV1HeadNamespacedPodProxyWithPath       | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions
 connectCoreV1DeleteNamespacedPodProxyWithPath     | /api/v1/namespaces/{namespace}/pods/{name}/proxy/{path} | PodProxyOptions

                      endpoint                      |                            path                             |        kind
----------------------------------------------------|-------------------------------------------------------------|---------------------
 connectCoreV1PutNamespacedServiceProxyWithPath     | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
 connectCoreV1PostNamespacedServiceProxyWithPath    | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
 connectCoreV1PatchNamespacedServiceProxyWithPath   | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
 connectCoreV1OptionsNamespacedServiceProxyWithPath | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
 connectCoreV1HeadNamespacedServiceProxyWithPath    | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
 connectCoreV1DeleteNamespacedServiceProxyWithPath  | /api/v1/namespaces/{namespace}/services/{name}/proxy/{path} | ServiceProxyOptions
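All twelve endpoints above share the same URL shape, differing only in the resource type and the HTTP verb used to connect. A minimal sketch of how such a path could be assembled (the helper name and layout are illustrative, not code from this PR):

```go
package main

import "fmt"

// buildProxyPath assembles a core/v1 ProxyWithPath URL for the given
// resource type ("pods" or "services"). Illustrative helper only.
func buildProxyPath(resource, namespace, name, path string) string {
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/%s",
		namespace, resource, name, path)
}

func main() {
	fmt.Println(buildProxyPath("pods", "default", "agnhost", "some/path"))
	fmt.Println(buildProxyPath("services", "default", "test-service", "some/path"))
}
```

Each verb (DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT) is then issued against the same path, which is why a single loop over the verbs can cover all six connect operations per resource.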

Member

The goal of the test is to confirm that the proxy passes valid requests through and returns valid responses back for the list of endpoints below.

I am asking what counts as a valid response here.
The current test checks only the response code (HTTP 200); doesn't the response body contain any values?

Member

Currently porter responds with foo in the response body for any request. Updating the test to check for that as well.

Member

Is this verifying whether the correct verb is being received by porter? E.g., if I send a POST and it arrives at the container as a GET, that is incorrect.

Member

At the moment it isn't, and I agree that shouldn't happen. Will consider how to update the test for this.

Member

I did check the above scenario when I was checking the base proxy endpoints for pods. I was just reflecting the request method back to the client.

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

// responseFunction echoes the HTTP method and request URI back to the
// client, and logs the same pair to stdout.
func responseFunction(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "responseFunction: %v -> %v", r.Method, r.RequestURI)
	fmt.Printf("responseFunction: \n%v -> %v\n", r.Method, r.RequestURI)
}

// handleRequests registers the handler for the supported verbs and
// serves on port 80.
func handleRequests() {
	r := mux.NewRouter()
	r.HandleFunc("/", responseFunction).Methods("GET", "POST", "PUT", "DELETE", "PATCH")
	log.Fatal(http.ListenAndServe(":80", r))
}

func main() {
	handleRequests()
}

Should porter be updated to support this requirement? or should the test use another method?

Member

During the conformance subproject meeting, @johnbelamaric suggested that we update the current command (or add a new one, or add a flag) in the agnhost image to support the above.

Member

From the conformance sub-project meeting today, @johnbelamaric suggested adding a flag option that would support this use case.

The current idea is that it would return a JSON payload that includes both the HTTP method that porter received and the standard response for that port.

@k8s-ci-robot k8s-ci-robot requested a review from oomichi October 14, 2020 21:31
@oomichi
Member

oomichi commented Jan 5, 2021

What is the current status of this PR?

@riaankleinhans
Contributor Author

@oomichi we are still waiting for https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/post-kubernetes-push-e2e-test-images to run successfully.
It was discussed last year in the 15 December Conformance meeting and was pushed out so that 1.20 could get out the door.
It has been added to the 13 January agenda again.

@heyste heyste force-pushed the pod-service-proxy-with-path branch from 8f6ff2e to 6c1f3b7 Compare January 18, 2021 00:03
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 18, 2021
@heyste
Member

heyste commented Jan 18, 2021

/test pull-kubernetes-e2e-kind
flake: ?

INFO: Repository debian-iptables-amd64 instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule container_pull defined at:
  /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/io_bazel_rules_docker/container/pull.bzl:228:33: in <toplevel>
Analyzing: 4 targets (880 packages loaded, 9667 targets configured) 
ERROR: An error occurred during the fetch of repository 'debian-iptables-amd64':
   Pull command failed: 2021/01/18 00:05:41 Running the Image Puller to pull images from a Docker Registry...
2021/01/18 00:05:42 Image pull was unsuccessful: unable to save remote image k8s.gcr.io/build-image/debian-iptables@sha256:e30919918299988b318f0208e7fd264dee21a6be9d74bbd9f7fc15e78eade9b4: unable to write image layers: unable to write image layer: unable to write the contents of layer 0 to /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/debian-iptables-amd64/image/000.tar.gz: read tcp 10.34.155.28:42598->172.217.214.128:443: read: connection reset by peer
 (/bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/go_puller_linux/file/downloaded -directory /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/debian-iptables-amd64/image -os linux -os-version  -os-features  -architecture amd64 -variant  -features  -name k8s.gcr.io/build-image/debian-iptables@sha256:e30919918299988b318f0208e7fd264dee21a6be9d74bbd9f7fc15e78eade9b4)
ERROR: /home/prow/go/src/k8s.io/kubernetes/build/BUILD:62:22: //build:kube-proxy-internal depends on @debian-iptables-amd64//image:image in repository @debian-iptables-amd64 which failed to fetch. no such package '@debian-iptables-amd64//image': Pull command failed: 2021/01/18 00:05:41 Running the Image Puller to pull images from a Docker Registry... 

@heyste
Member

heyste commented Jan 18, 2021

/test pull-kubernetes-e2e-gce-ubuntu-containerd
flake

Kubernetes e2e suite: [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]`
I0118 00:30:45.816] Jan 18 00:30:35.507: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
I0118 00:30:45.816] Jan 18 00:30:35.749: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7144" in namespace "provisioning-7144" to be "Succeeded or Failed"
I0118 00:30:45.816] Jan 18 00:30:35.858: INFO: Pod "hostpath-symlink-prep-provisioning-7144": Phase="Pending", Reason="", readiness=false. Elapsed: 108.227763ms
I0118 00:30:45.817] Jan 18 00:30:37.936: INFO: Pod "hostpath-symlink-prep-provisioning-7144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186366582s
I0118 00:30:45.817] Jan 18 00:30:39.982: INFO: Pod "hostpath-symlink-prep-provisioning-7144": Phase="Failed", Reason="", readiness=false. Elapsed: 4.232468612s
I0118 00:30:45.817] Jan 18 00:30:39.982: FAIL: while waiting for hostPath init pod to succeed

@heyste
Member

heyste commented Jan 18, 2021

/test pull-kubernetes-e2e-ubuntu-gce-network-policies
flake

Kubernetes e2e suite: [sig-network] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
I0118 00:29:07.037] Jan 18 00:28:59.224: INFO: Selector matched 1 pods for map[app:iperf-e2e-cli-pod]
I0118 00:29:07.037] Jan 18 00:28:59.224: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0118 00:29:07.037] Jan 18 00:28:59.224: INFO: Running '/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.233.202.79 --kubeconfig=/workspace/.kube/config --namespace=network-perf-6431 logs iperf-e2e-cli-pod-0 iperf-client'
I0118 00:29:07.037] Jan 18 00:28:59.493: INFO: stderr: ""
I0118 00:29:07.037] Jan 18 00:28:59.493: INFO: stdout: "connect failed: Operation in progress\n"
I0118 00:29:07.038] Jan 18 00:29:01.493: FAIL: Unexpected error, Failed to find "0-", last result: "connect failed: Operation in progress
I0118 00:29:07.038] " when running forEach on the pods. 

@heyste
Member

heyste commented Jan 18, 2021

/test pull-kubernetes-e2e-ubuntu-gce-network-policies
flake

Kubernetes e2e suite: [sig-network] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
I0118 01:53:21.544] Jan 18 01:53:19.173: INFO: Service reachability failing with error: error running /go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.17.210 --kubeconfig=/workspace/.kube/config --namespace=services-6602 exec execpodvxxsv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80:
I0118 01:53:21.544] Command stdout:
I0118 01:53:21.544] 
I0118 01:53:21.544] stderr:
I0118 01:53:21.544] + nc -zv -t -w 2 externalname-service 80
I0118 01:53:21.545] nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
I0118 01:53:21.545] command terminated with exit code 1
I0118 01:53:21.545] 
I0118 01:53:21.545] error:
I0118 01:53:21.545] exit status 1
I0118 01:53:21.546] Retrying...
I0118 01:53:57.368] Jan 18 01:53:44.421: INFO: Selector matched 1 pods for map[app:iperf-e2e-cli-pod]
I0118 01:53:57.369] Jan 18 01:53:44.421: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0118 01:53:57.369] Jan 18 01:53:44.421: INFO: Running '/go/src/k8s.io/kubernetes/kubernetes/platforms/linux/amd64/kubectl --server=https://35.197.17.210 --kubeconfig=/workspace/.kube/config --namespace=network-perf-6870 logs iperf-e2e-cli-pod-0 iperf-client'
I0118 01:53:57.369] Jan 18 01:53:44.755: INFO: stderr: ""
I0118 01:53:57.370] Jan 18 01:53:44.755: INFO: stdout: "connect failed: Connection refused\n"
I0118 01:53:57.370] Jan 18 01:53:46.756: FAIL: Unexpected error, Failed to find "0-", last result: "connect failed: Connection refused
I0118 01:53:57.370] " when running forEach on the pods.
I0118 01:53:57.370] 

@k8s-ci-robot
Contributor

@Riaankl: The following test failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
pull-kubernetes-e2e-ubuntu-gce-network-policies 6c1f3b7 link /test pull-kubernetes-e2e-ubuntu-gce-network-policies

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@aojea
Member

aojea commented Jan 18, 2021

Kubernetes e2e suite: [sig-network] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)

this is a known issue , fix WIP in #94015 , is not a blocker for this PR

@heyste
Member

heyste commented Jan 19, 2021

With a new agnhost image now available, the test now uses the porter --json-response command to verify that the verb sent to the proxy (other than HEAD) is the verb received by the pod or service.
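A sketch of the verification the comment above describes (the JSON field name is an assumption, not the exact agnhost schema; HEAD is skipped because a HEAD response carries no body to decode):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// porterResponse models the body returned by porter's JSON mode.
// The field name is illustrative, not the exact agnhost format.
type porterResponse struct {
	Method string `json:"method"`
}

// verbMatches reports whether the verb the pod or service saw equals
// the verb the test sent through the proxy. HEAD is skipped because
// its response has no body.
func verbMatches(body []byte, sent string) bool {
	if sent == "HEAD" {
		return true
	}
	var r porterResponse
	if err := json.Unmarshal(body, &r); err != nil {
		return false
	}
	return r.Method == sent
}

func main() {
	fmt.Println(verbMatches([]byte(`{"method":"POST"}`), "POST")) // true
	fmt.Println(verbMatches(nil, "HEAD"))                         // true
}
```

This is the property the earlier review thread asked for: a POST that arrives at the container as a GET would now fail the check instead of passing on status code alone.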

@oomichi @spiffxp Does the test look okay for a lgtm / approve now? TIA

@oomichi
Member

oomichi commented Jan 19, 2021

With a new agnhost image now available, the test now uses the porter --json-response command to verify that the verb sent (other than HEAD) to the proxy is the verb received by the pod or service.

@oomichi @spiffxp Does the test look okay for a lgtm / approve now? TIA

I am OK with this PR and have already put my /approve.
I'd like to see an lgtm from the other reviewers.

@hh
Member

hh commented Jan 20, 2021

So glad to see the image promotion process finally provide the required updated image. Great work @heyste on both the image and the test!

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 20, 2021
@riaankleinhans
Contributor Author

/unhold

Labels
  • approved — Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/conformance — Issues or PRs related to kubernetes conformance tests
  • area/test
  • cncf-cla: yes — Indicates the PR's author has signed the CNCF CLA.
  • kind/cleanup — Categorizes issue or PR as related to cleaning up code, process, or technical debt.
  • lgtm — "Looks good to me", indicates that a PR is ready to be merged.
  • needs-priority — Indicates a PR lacks a `priority/foo` label and requires one.
  • needs-triage — Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • release-note-none — Denotes a PR that doesn't merit a release note.
  • sig/architecture — Categorizes an issue or PR as relevant to SIG Architecture.
  • sig/network — Categorizes an issue or PR as relevant to SIG Network.
  • sig/testing — Categorizes an issue or PR as relevant to SIG Testing.
  • size/L — Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Write PodProxyWithPath & ServiceProxyWithPath test+promote - + 12 endpoint coverage