
PersistentVolumeClaimBinder #6105

Merged (12 commits) on Apr 27, 2015

Conversation

markturansky
Contributor

Following #6002 (client).
From #5105.

@googlebot

Thanks for your pull request.

It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check the information on your CLA or see this help article on setting the email on your git commits.

Once you've done that, please reply here to let us know. If you signed the CLA as a corporation, please let us know the company's name.

@markturansky
Contributor Author

@derekwaynecarr you can ignore all these client commits on this PR. They come from "soft resetting" the client branch onto my manager branch and will drop off once the client PR is merged.

This manager implementation can use some cleanup after the client is merged and before I ask for a review.

@markturansky
Contributor Author

@thockin This is ready for review.

The two big additions are:

  • PersistentVolumeManager is a simple "sync the world" loop like every other controller (a rough sketch of the pattern follows below). I would like to refactor this into @lavalamp's DeltaFIFO queue as a follow-on task.
  • A custom cache.Store/Index in pkg/volumemanager/types.go that indexes PVs by access modes and keeps them sorted for easy matching.
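
For illustration only, here is a minimal, self-contained sketch of the "sync the world" pattern the first bullet describes; all type and function names are placeholders, not the PR's actual API:

package main

import (
	"fmt"
	"time"
)

// volume is a stand-in for a PersistentVolume as seen by the manager.
type volume struct {
	name  string
	bound bool
}

// binder stands in for the PersistentVolumeManager; a real implementation
// would list objects through a client instead of holding a slice.
type binder struct {
	volumes []volume
}

// syncTheWorld walks every volume and reconciles it; a real manager would
// try to match unbound volumes against outstanding claims here.
func (b *binder) syncTheWorld() {
	for i := range b.volumes {
		if !b.volumes[i].bound {
			b.volumes[i].bound = true
			fmt.Printf("bound volume %s\n", b.volumes[i].name)
		}
	}
}

// Run resyncs on a fixed period, like the other controllers, until stopCh closes.
func (b *binder) Run(period time.Duration, stopCh <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		b.syncTheWorld()
		select {
		case <-ticker.C:
		case <-stopCh:
			return
		}
	}
}

func main() {
	b := &binder{volumes: []volume{{name: "pv-1"}, {name: "pv-2"}}}
	stop := make(chan struct{})
	go b.Run(10*time.Millisecond, stop)
	time.Sleep(50 * time.Millisecond)
	close(stop)
}

The DeltaFIFO refactor mentioned above would replace the periodic full resync with event-driven processing of individual changes.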

@@ -190,6 +194,9 @@ func (s *CMServer) Run(_ []string) error {
	namespaceManager := namespace.NewNamespaceManager(kubeClient)
	namespaceManager.Run(s.NamespaceSyncPeriod)

	persistentVolumeManager := volumemanager.NewPersistentVolumeManager(kubeClient)
	persistentVolumeManager.Run(s.NamespaceSyncPeriod)
Contributor Author


wrong period variable.
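
For reference, a minimal sketch of the likely fix, assuming a dedicated sync-period setting on CMServer (the field name PVClaimBinderSyncPeriod is an assumption, not taken from this PR):

	persistentVolumeManager := volumemanager.NewPersistentVolumeManager(kubeClient)
	// Use a binder-specific period rather than reusing the namespace controller's setting.
	persistentVolumeManager.Run(s.PVClaimBinderSyncPeriod) // field name is hypothetical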

@markturansky
Contributor Author

Rebased again.

@thockin PTAL?

@thockin
Member

thockin commented Apr 27, 2015

On it

@thockin
Member

thockin commented Apr 27, 2015

I'm going to do a review of this as-is, but there's sort of a monster problem: This should use a "binding" sub-resource, as with pods. We've drawn the analogy of claims and PVs being like pods and nodes - we should follow the same pattern. That seems like a significant change, so I am willing to proceed on this path as long as we have a strong promise to fix it ASAP. This is, strictly speaking, an API change (albeit constrained to ~1 consumer - you).

As per our IRC chat, there are a lot of things this might change:

  • Do we bind the Claim to the PV or vice-versa?
  • Do we move the Claim.VolumeRef to Spec?
  • Do we allow users to ask for a specific PV?
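
For context, the pod pattern referenced above is the Binding subresource. Purely as an illustration of the shape such a subresource could take for claims, here is a hypothetical object; the type and field names are assumptions, not what this PR or later work defines:

package api

// PersistentVolumeClaimBinding is a hypothetical subresource modeled on the
// pod Binding object; nothing here is taken from this PR.
type PersistentVolumeClaimBinding struct {
	TypeMeta   `json:",inline"`
	ObjectMeta `json:"metadata,omitempty"` // identifies the claim being bound

	// Target names the PersistentVolume the claim should be bound to.
	Target ObjectReference `json:"target"`
}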

@@ -201,6 +201,14 @@ func FuzzerFor(t *testing.T, version string, src rand.Source) *fuzz.Fuzzer {
		c.FuzzNoCustom(s) // fuzz self without calling this function again
		s.Type = api.SecretTypeOpaque
	},
	func(pv *api.PersistentVolume, c fuzz.Continue) {
		c.FuzzNoCustom(pv) // fuzz self without calling this function again
		pv.Status.Phase = api.VolumePending
Member


This isn't actually fuzzing. You should pick a random number in [0, num_phases) and index into a table of phase strings.
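
A sketch of the suggested approach, assuming the phase type is api.PersistentVolumePhase, that constants such as api.VolumeAvailable and api.VolumeBound exist alongside api.VolumePending (only VolumePending appears in the diff; the others are assumptions), and that fuzz.Continue exposes the embedded *rand.Rand:

	func(pv *api.PersistentVolume, c fuzz.Continue) {
		c.FuzzNoCustom(pv) // fuzz self without calling this function again
		// Pick a random phase from a table instead of hard-coding one value.
		phases := []api.PersistentVolumePhase{api.VolumePending, api.VolumeAvailable, api.VolumeBound}
		pv.Status.Phase = phases[c.Intn(len(phases))]
	},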

@thockin
Member

thockin commented Apr 27, 2015

Looks good overall. Let's have the discussion about sub-resource. I think that doing it right means flipping the current status/spec arrangement and that is a tedious and breaking API change.

As per IRC - maybe we can flag-gate this entirely? Get this committed and off the books, let your folks proceed with testing, but force anyone who wants to use it to pass --enable_alpha_persistent_volumes_support or something.
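
A minimal sketch of what such a gate could look like in the controller-manager, assuming a boolean field on CMServer; the field name EnableVolumeBinder, the sync-period field, and the wiring are illustrative, and only the flag name comes from the comment above:

// In CMServer.AddFlags (field name is hypothetical):
fs.BoolVar(&s.EnableVolumeBinder, "enable_alpha_persistent_volumes_support", false,
	"If true, start the PersistentVolumeClaimBinder in the controller-manager.")

// In CMServer.Run:
if s.EnableVolumeBinder {
	persistentVolumeManager := volumemanager.NewPersistentVolumeManager(kubeClient)
	persistentVolumeManager.Run(s.PVClaimBinderSyncPeriod) // sync period field is also hypothetical
}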

@markturansky
Contributor Author

Immediate feedback addressed, and I added a flag to optionally start the binder in the controller-manager.

Regarding the proposed change for binding, I will create a new PR as a follow-on task.

if volume.Spec.ClaimRef == nil {
	return fmt.Errorf("PersistentVolume[%s] expected to be bound but found nil claimRef: %+v", volume.Name, volume)
} else {
	// TODO: implement Recycle method on plugins
Member


Just to be clear, at this point the admin must remove and recreate the PV object or set Phase manually, yes? There's no other way to revive it?

Contributor Author


Yes, the current reclamation process is manual.

@thockin
Member

thockin commented Apr 27, 2015

LGTM - do you want to fix the last nit or not worthwhile?

@markturansky
Contributor Author

@thockin I'm good as-is and will start looking at the Binding subresource.

thockin added a commit that referenced this pull request Apr 27, 2015
@thockin thockin merged commit 635c393 into kubernetes:master Apr 27, 2015