
cluster/eks: initial commit #71322

Closed (wants to merge 1 commit)

Conversation

gyuho
Member

@gyuho gyuho commented Nov 21, 2018

Add EKS e2e script.

What type of PR is this?

/kind feature

What this PR does / why we need it:

Kubernetes e2e tests require each provider to implement its own util script, and EKS does not have one (fix aws/aws-k8s-tester#12).

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):

Trying to fix aws/aws-k8s-tester#12.

Special notes for your reviewer:

@krzyzacy @BenTheElder

ref. kubernetes/test-infra#9814

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/feature Categorizes issue or PR as related to a new feature. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Nov 21, 2018
@k8s-ci-robot k8s-ci-robot added sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 21, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: gyuho
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: mikedanese

If they are not already assigned, you can assign the PR to them by writing /assign @mikedanese in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Add EKS e2e script.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
@gyuho
Member Author

gyuho commented Nov 21, 2018

/cc @shyamjvs

@gyuho
Member Author

gyuho commented Nov 21, 2018

To give more background on this PR: we recently added an EKS test plugin to kubetest, but haven't been able to get kubectl to work. It turned out kubetest was calling cluster/kubectl.sh, which runs

source "${KUBE_ROOT}/cluster/kube-util.sh"

which in turn resolves

PROVIDER_UTILS="${KUBE_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh"

I hope having this script helps us debug the kubectl failures.
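The dispatch chain above can be sketched as follows. This is a minimal, self-contained approximation, not the actual cluster/kubectl.sh; the KUBE_ROOT value is an illustrative placeholder.

```shell
# Sketch of how cluster/kubectl.sh picks up provider-specific helpers.
KUBE_ROOT="./kubernetes"      # illustrative checkout path, not the real default
KUBERNETES_PROVIDER="eks"     # the provider this PR adds

# cluster/kube-util.sh resolves the provider-specific util script:
PROVIDER_UTILS="${KUBE_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh"

if [ -f "${PROVIDER_UTILS}" ]; then
  # shellcheck disable=SC1090
  . "${PROVIDER_UTILS}"
else
  # With no cluster/eks/util.sh present, kubectl.sh falls back to
  # defaults such as localhost:8080, matching the connection-refused
  # errors in the logs.
  echo "provider util not found: ${PROVIDER_UTILS}" >&2
fi
```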

Thanks!

EDIT: This is the error message that I was getting:

W1121 19:22:20.855] 2018/11/21 19:22:20 process.go:155: Step '/tmp/aws-k8s-tester767933017 eks --path=/tmp/aws-k8s-tester510470242 check cluster' finished in 6.625144142s
W1121 19:22:20.904] 2018/11/21 19:22:20 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 19:22:21.341] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 19:22:21.345] 2018/11/21 19:22:21 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 440.928862ms
W1121 19:22:21.345] 2018/11/21 19:22:21 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...
W1121 19:22:31.345] 2018/11/21 19:22:31 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 19:22:31.438] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 19:22:31.441] 2018/11/21 19:22:31 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 96.048207ms
W1121 19:22:31.441] 2018/11/21 19:22:31 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...
W1121 19:22:41.442] 2018/11/21 19:22:41 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 19:22:41.539] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 19:22:41.542] 2018/11/21 19:22:41 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 100.389731ms
W1121 19:22:41.542] 2018/11/21 19:22:41 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...
W1121 19:22:51.542] 2018/11/21 19:22:51 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 19:22:51.638] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 19:22:51.640] 2018/11/21 19:22:51 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 97.6761ms
W1121 19:22:51.640] 2018/11/21 19:22:51 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...

@mikedanese
Member

mikedanese commented Nov 21, 2018

Can kubetest call kubectl directly? This seems like a pretty roundabout way of fixing the stated issue. Note that cluster/kubectl.sh has been deprecated for 4 years (#9342).

Please also see the deprecation notice in https://github.com/kubernetes/kubernetes/blob/master/cluster/README.md and #49213.

How does this work for GKE?

@krzyzacy
Member

ouch
/shrug

by default kubectl comes from the image (which is from the gcloud SDK), cluster/kubectl.sh comes from the extracted k8s tarball, and we were not mucking with PATH :-\

We probably can change how skew tests work, cc @pwittrock @mengqiy @fejta on thoughts

@k8s-ci-robot k8s-ci-robot added the ¯\_(ツ)_/¯ ¯\\\_(ツ)_/¯ label Nov 21, 2018
@k8s-ci-robot
Contributor

@gyuho: The following tests failed, say /retest to rerun them all:

Test name | Commit | Details | Rerun command
pull-kubernetes-integration | 7be6ea2 | link | /test pull-kubernetes-integration
pull-kubernetes-kubemark-e2e-gce-big | 7be6ea2 | link | /test pull-kubernetes-kubemark-e2e-gce-big
pull-kubernetes-e2e-gce-100-performance | 7be6ea2 | link | /test pull-kubernetes-e2e-gce-100-performance

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@gyuho
Member Author

gyuho commented Nov 21, 2018

@mikedanese @krzyzacy

Can kubetest call kubectl directly?

That would make things easier for us as provider plugin maintainers.

cluster/kubectl.sh is from the extracted k8s tarball and we were not mucking with PATH

Yeah, I would like to specify the kubectl path and all the flags (e.g. --kubeconfig). Setting KUBECTL_PATH inside the test plugin does not seem to have any effect.

@gyuho
Member Author

gyuho commented Nov 21, 2018

I found out kubetest is setting KUBECTL=./cluster/kubectl.sh KUBE_CONFIG_FILE=config-test.sh.

I tried to overwrite these env vars inside our plugin, but it still does not work:

W1121 23:22:52.161] {"level":"info","ts":"2018-11-21T23:22:52.158Z","caller":"eks/cluster.go:91","msg":"set KUBE_* environmental variables for kubetest","envs":["AWS_SHARED_CREDENTIALS_FILE=/etc/eks-aws-credentials/eks-aws-credentials","GOPATH=/go","HOSTNAME=ce45d7ed-ede1-11e8-a13d-0a580a6c03f5","DOCKER_IN_DOCKER_ENABLED=false","BAZEL_REMOTE_CACHE_ENABLED=false","E2E_GOOGLE_APPLICATION_CREDENTIALS=/etc/service-account/service-account.json","BOSKOS_SERVICE_HOST=10.63.250.132","JOB_TYPE=periodic","BOSKOS_METRICS_SERVICE_PORT_DEFAULT=80","BOOTSTRAP_MIGRATION=yes","CLOUDSDK_CONFIG=/workspace/.config/gcloud","AWS_K8S_TESTER_EKS_KUBECTL_DOWNLOAD_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl","KUBERNETES_SERVICE_HOST=10.63.240.1","KUBE_GCE_INSTANCE_PREFIX=bootstrap-e2e","BOSKOS_METRICS_PORT_80_TCP_PORT=80","AWS_K8S_TESTER_EKS_LOG_DEBUG=false","WORKSPACE=/workspace","HOME=/workspace","BOSKOS_SERVICE_PORT=80","PATH=/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","AWS_K8S_TESTER_EKS_UPLOAD_WORKER_NODE_LOGS=false","BOSKOS_PORT_80_TCP_PROTO=tcp","AWS_K8S_TESTER_EKS_WORKER_NODE_ASG_MAX=1","BOSKOS_PORT=tcp://10.63.250.132:80","AWS_K8S_TESTER_EKS_LOG_ACCESS=false","AWS_K8S_TESTER_EKS_AWS_K8S_TESTER_DOWNLOAD_URL=https://github.com/aws/aws-k8s-tester/releases/download/0.1.3/aws-k8s-tester-0.1.3-linux-amd64","TERM=xterm","AWS_K8S_TESTER_EKS_UPLOAD_TESTER_LOGS=false","CLOUDSDK_CORE_DISABLE_PROMPTS=1","BOSKOS_SERVICE_PORT_DEFAULT=80","AWS_K8S_TESTER_EKS_ENABLE_NODE_SSH=true","PROW_JOB_ID=ce45d7ed-ede1-11e8-a13d-0a580a6c03f5","AWS_K8S_TESTER_EKS_WAIT_BEFORE_DOWN=1m0s","BOSKOS_METRICS_PORT_80_TCP_PROTO=tcp","KUBERNETES_SERVICE_PORT=443","KUBETEST_IN_DOCKER=true","ARTIFACTS=/workspace/artifacts","BOSKOS_PORT_80_TCP=tcp://10.63.250.132:80","AWS_K8S_TESTER_EKS_WORKER_NODE_INSTANCE_TYPE=m3.xlarge","GCS_ARTIFACTS_DIR=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-late
st-aws-eks-1-10-prod/518/artifacts","POD_NAME=ce45d7ed-ede1-11e8-a13d-0a580a6c03f5","KUBERNETES_SERVICE_PORT_HTTPS=443","AWS_K8S_TESTER_EKS_DOWN=true","AWS_K8S_TESTER_EKS_ALB_ENABLE=false","BOSKOS_PORT_80_TCP_PORT=80","AWS_K8S_TESTER_EKS_ENABLE_WORKER_NODE_HA=true","CLOUDSDK_EXPERIMENTAL_FAST_COMPONENT_UPDATE=false","BUILD_ID=518","AWS_K8S_TESTER_EKS_WORKER_NODE_AMI=ami-0f54a2f7d2e9c88b3","BOSKOS_METRICS_PORT_80_TCP_ADDR=10.63.249.148","JOB_NAME=ci-kubernetes-e2e-latest-aws-eks-1-10-prod","JOB_SPEC={"type":"periodic","job":"ci-kubernetes-e2e-latest-aws-eks-1-10-prod","buildid":"518","prowjobid":"ce45d7ed-ede1-11e8-a13d-0a580a6c03f5","refs":{}}","KUBETEST_MANUAL_DUMP=y","JENKINS_AWS_SSH_PRIVATE_KEY_FILE=/root/.ssh/kube_aws_rsa","CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true","SHLVL=2","INSTANCE_PREFIX=bootstrap-e2e","AWS_K8S_TESTER_EKS_TEST_MODE=embedded","KUBE_AWS_INSTANCE_PREFIX=bootstrap-e2e","AWS_K8S_TESTER_EKS_KUBERNETES_VERSION=1.10","AWS_K8S_TESTER_EKS_AWS_IAM_AUTHENTICATOR_DOWNLOAD_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator","KUBERNETES_PORT_443_TCP_ADDR=10.63.240.1","KUBERNETES_PORT=tcp://10.63.240.1:443","KUBERNETES_PORT_443_TCP_PORT=443","JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/root/.ssh/google_compute_engine.pub","=./test-infra/jenkins/bootstrap.py","BUILD_NUMBER=518","KUBERNETES_PORT_443_TCP_PROTO=tcp","BOSKOS_METRICS_SERVICE_PORT=80","GOOGLE_APPLICATION_CREDENTIALS=/etc/service-account/service-account.json","BOSKOS_METRICS_PORT_80_TCP=tcp://10.63.249.148:80","BOSKOS_METRICS_PORT=tcp://10.63.249.148:80","KUBERNETES_PORT_443_TCP=tcp://10.63.240.1:443","JENKINS_AWS_SSH_PUBLIC_KEY_FILE=/root/.ssh/kube_aws_rsa.pub","NODE_NAME=gke-prow-default-pool-3c8994a8-6b7w","AWS_K8S_TESTER_EKS_WORKER_NODE_ASG_MIN=1","PWD=/workspace","GO_TARBALL=go1.11.1.linux-amd64.tar.gz","JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/root/.ssh/google_compute_engine","BOSKOS_PORT_80_TCP_ADDR=10.63.250.132","BOSKOS_METRICS_SERVICE_HOST=
10.63.249.148","BAZEL_VERSION=0.18.0","IMAGE=gcr.io/k8s-testimages/kubekins-e2e:v20181120-dea0825e3-master","KUBERNETES_PROVIDER=eks","CLUSTER_NAME=bootstrap-e2e","GINKGO_PARALLEL_NODES=30","KUBERNETES_RELEASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci","KUBERNETES_RELEASE=v1.14.0-alpha.0.539+2b0212de9cdf4c","KUBERNETES_SKIP_CONFIRM=y","KUBERNETES_SKIP_CREATE_CLUSTER=y","KUBERNETES_DOWNLOAD_TESTS=y","CLUSTER_API_VERSION=1.14.0-alpha.0.539+2b0212de9cdf4c","KUBECTL=/tmp/kubectl269713586","KUBE_CONFIG_FILE=/tmp/aws-k8s-tester032129158.awsk8stester-eks-20181121-512xllz.kubeconfig.generated.yaml","KUBE_RUNTIME_CONFIG=batch/v2alpha1=true","KUBE_MASTER_URL=https://901722CFC7BCC55CB1EF0DDAF1A5F274.yl4.us-west-2.eks.amazonaws.com","KUBECONFIG=/tmp/aws-k8s-tester032129158.awsk8stester-eks-20181121-512xllz.kubeconfig.generated.yaml"]}

W1121 23:22:52.162] {"level":"info","ts":"2018-11-21T23:22:52.159Z","caller":"eks/tester_embedded.go:301","msg":"overwrote KUBECONFIG from an existing cluster","KUBECONFIG":"/tmp/aws-k8s-tester032129158.awsk8stester-eks-20181121-512xllz.kubeconfig.generated.yaml","cluster-name":"awsk8stester-eks-20181121-512xllz"}
W1121 23:22:53.111] {"level":"info","ts":"2018-11-21T23:22:53.110Z","caller":"eks/tester_embedded.go:313","msg":"checking kubectl after cluster creation","kubectl":"/tmp/kubectl269713586","kubectl-download-url":"https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl","kubectl-version":"Client Version: version.Info{Major:\"1\", Minor:\"10\", GitVersion:\"v1.10.3\", GitCommit:\"2bba0127d85d5a46ab4b778548be28623b32d0b0\", GitTreeState:\"clean\", BuildDate:\"2018-07-26T20:40:11Z\", GoVersion:\"go1.9.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"10+\", GitVersion:\"v1.10.3-eks\", GitCommit:\"58c199a59046dbf0a13a387d3491a39213be53df\", GitTreeState:\"clean\", BuildDate:\"2018-09-21T21:00:04Z\", GoVersion:\"go1.9.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n","kubectl-version-err":"<nil>"}
W1121 23:22:53.111] {"level":"info","ts":"2018-11-21T23:22:53.110Z","caller":"eks/tester_embedded.go:323","msg":"created EKS deployer","cluster-name":"awsk8stester-eks-20181121-512xllz","aws-k8s-tester-eksconfig-path":"/tmp/aws-k8s-tester032129158","request-started":"4 seconds ago"}
W1121 23:22:53.232] 2018/11/21 23:22:53 process.go:155: Step '/tmp/aws-k8s-tester076388909 eks --path=/tmp/aws-k8s-tester032129158 check cluster' finished in 4.80983124s
W1121 23:22:53.234] 2018/11/21 23:22:53 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 23:22:53.559] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 23:22:53.563] 2018/11/21 23:22:53 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 329.42226ms
W1121 23:22:53.564] 2018/11/21 23:22:53 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...
W1121 23:23:03.564] 2018/11/21 23:23:03 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
W1121 23:23:03.644] The connection to the server localhost:8080 was refused - did you specify the right host or port?
W1121 23:23:03.647] 2018/11/21 23:23:03 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 83.386856ms
W1121 23:23:03.647] 2018/11/21 23:23:03 e2e.go:341: Failed to reach api. Sleeping for 10 seconds before retrying...

...

W1121 23:23:33.905] 2018/11/21 23:23:33 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true
W1121 23:23:33.971] Skeleton Provider: prepare-e2e not implemented
W1121 23:23:33.971] KUBE_MASTER_IP: 
W1121 23:23:33.971] KUBE_MASTER: 
I1121 23:23:34.072] Setting up for KUBERNETES_PROVIDER="eks".
I1121 23:23:41.098] Running Suite: Kubernetes e2e suite
I1121 23:23:41.098] ===================================
I1121 23:23:41.098] Random Seed: 1542842614 - Will randomize all specs
I1121 23:23:41.098] Will run 1919 specs

It would be great if there were a way to pass these env vars to cluster/kubectl.sh in the meantime:

"KUBECTL=/tmp/kubectl269713586","KUBE_CONFIG_FILE=/tmp/aws-k8s-tester032129158.awsk8stester-eks-20181121-512xllz.kubeconfig.generated.yaml","KUBE_RUNTIME_CONFIG=batch/v2alpha1=true","KUBE_MASTER_URL=https://901722CFC7BCC55CB1EF0DDAF1A5F274.yl4.us-west-2.eks.amazonaws.com","KUBECONFIG=/tmp/aws-k8s-tester032129158.awsk8stester-eks-20181121-512xllz.kubeconfig.generated.yaml"]

@gyuho
Member Author

gyuho commented Nov 21, 2018

Sorry for spamming. There are so many unknowns for a new kubetest plugin provider.

I1121 23:23:41.131] Failure [0.000 seconds]
I1121 23:23:41.131] [BeforeSuite] BeforeSuite
I1121 23:23:41.131] _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:67
I1121 23:23:41.131] 
I1121 23:23:41.132]   Node 1 disappeared before completing BeforeSuite
I1121 23:23:41.132] 
I1121 23:23:41.132]   _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:67
I1121 23:23:41.132] ------------------------------
I1121 23:23:41.132] Nov 21 23:23:41.098: INFO: Running AfterSuite actions on all nodes
I1121 23:23:41.132] 
I1121 23:23:42.107] 
I1121 23:23:42.108] 	 -------------------------------------------------------------------
I1121 23:23:42.108] 	|                                                                   |
I1121 23:23:42.108] 	|  Ginkgo timed out waiting for all parallel nodes to report back!  |
I1121 23:23:42.108] 	|                                                                   |
I1121 23:23:42.108] 	 -------------------------------------------------------------------
I1121 23:23:42.108] 
I1121 23:23:42.109] [1] W1121 23:23:40.823290    1313 test_context.go:385] Unknown provider "eks", proceeding as for --provider=skeleton.
I1121 23:23:42.109] [1] I1121 23:23:40.823472    1313 e2e.go:224] Starting e2e run "77a6749e-ede4-11e8-9c49-0a580a3c191f" on Ginkgo node 1
I1121 23:23:42.109] [2] W1121 23:23:40.651025    1308 test_context.go:385] Unknown provider "eks", proceeding as for --provider=skeleton.
I1121 23:23:42.109] [2] I1121 23:23:40.651217    1308 e2e.go:224] Starting e2e run "778e3684-ede4-11e8-bbfc-0a580a3c191f" on Ginkgo node 2

The error comes from

// Make sure that all test runs have a valid TestContext.CloudConfig.Provider.
var err error
TestContext.CloudConfig.Provider, err = SetupProviderConfig(TestContext.Provider)
if err == nil {
	return
}
if !os.IsNotExist(errors.Cause(err)) {
	Failf("Failed to setup provider config: %v", err)
}
// We allow unknown provider parameters for historic reasons. At least log a
// warning to catch typos.
// TODO (https://github.com/kubernetes/kubernetes/issues/70200):
// - remove the fallback for unknown providers
// - proper error message instead of Failf (which panics)
klog.Warningf("Unknown provider %q, proceeding as for --provider=skeleton.", TestContext.Provider)

That code path requires implementing

// ProviderInterface contains the implementation for certain
// provider-specific functionality.
type ProviderInterface interface {
	FrameworkBeforeEach(f *Framework)
	FrameworkAfterEach(f *Framework)
	ResizeGroup(group string, size int32) error
	GetGroupNodes(group string) ([]string, error)
	GroupSize(group string) (int, error)
	CreatePD(zone string) (string, error)
	DeletePD(pdName string) error
	CreatePVSource(zone, diskName string) (*v1.PersistentVolumeSource, error)
	DeletePVSource(pvSource *v1.PersistentVolumeSource) error
	CleanupServiceResources(c clientset.Interface, loadBalancerName, region, zone string)
	EnsureLoadBalancerResourcesDeleted(ip, portRange string) error
	LoadBalancerSrcRanges() []string
	EnableAndDisableInternalLB() (enable, disable func(svc *v1.Service))
}

Do we still require ProviderInterface for kubernetes/kubernetes e2e tests? I was expecting that, as long as we implement the kubetest deployer interface, everything would just work. Do we need to implement this? If yes, is there a reference implementation I can follow?

Thanks.

@BenTheElder
Member

I agree with @mikedanese that we should really stop using and adding to cluster/; we should fix kubetest instead.

ProviderInterface is for storage mainly IIRC, and actually something recent-ish? cc @pohly
For say, conformance testing we do not need this. Some storage tests might, but many of those also do something like skip the test if not on GCP... 😬

@gyuho
Member Author

gyuho commented Nov 21, 2018

Yeah, ProviderInterface has some implementations in https://github.com/kubernetes/kubernetes/tree/master/test/e2e/framework/providers, but I might wait until the cloud provider code moves out-of-tree. Running conformance tests should be good enough for us for now.

stop using and adding to cluster/ :/ we should fix kubetest instead.

Sounds good.

Please let me know if you need any help! It looks like the providers are hardcoded in https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L52.

@BenTheElder
Member

Help would definitely be appreciated! I'm still trying to help review work here, but I'm not really working on kubetest these days, mostly kind. We definitely want to get this working nicely out-of-tree and without the cluster/* scripts as much as possible ...

OTOH, @krzyzacy and I have some rough plans for a kubetest rewrite that would avoid these issues from the start; we might get to that in the spring (?) ...

@gyuho
Member Author

gyuho commented Nov 22, 2018

@BenTheElder Yes, I will open a PR proposing to let providers pass kubectl paths in the kubetest package. I think we can make the changes in kubetest's main.go run path. Thanks!

@gyuho
Member Author

gyuho commented Nov 22, 2018

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 22, 2018
@gyuho
Member Author

gyuho commented Nov 30, 2018

We've added a KubectlCommand method to the kubetest deployer interface, through which each provider specifies its own kubectl command and arguments (kubernetes/test-infra#10216). We will revisit if we still have issues. Thanks, all!

Development

Successfully merging this pull request may close these issues.

kubernetes/kubernetes.cluster/kubectl.sh fails
5 participants