Make it possible to reclaim a managed resource #88

Open
muvaf opened this issue Nov 20, 2019 · 4 comments
Labels: enhancement (New feature or request), wontfix (This will not be worked on)

Comments

muvaf (Member) commented Nov 20, 2019

As of #87 this is no longer possible. For full context, it may help to look at the discussion on that PR.

What problem are you facing?

Currently, we don't support rebinding a Released managed resource to a new claim. This is in line with the Kubernetes PVC/PV model; however, not all managed resources are as stateful as volumes, so users might want to reuse the same managed resource.
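
To make the current behaviour concrete, here is a minimal Go sketch of the binding states involved. It is purely illustrative and deliberately simplified; the type and function names are assumptions and do not match the actual crossplane-runtime API.

```go
// A simplified sketch of the binding model described above. These are not the
// real crossplane-runtime types; they only illustrate the behaviour.
package sketch

// BindingPhase approximates a managed resource's lifecycle relative to claims.
type BindingPhase string

const (
	BindingPhaseUnbound  BindingPhase = "Unbound"  // never claimed; free to bind
	BindingPhaseBound    BindingPhase = "Bound"    // currently bound to a claim
	BindingPhaseReleased BindingPhase = "Released" // its claim was deleted; terminal today
)

// CanBind reflects today's behaviour: a Released resource is never offered to
// a new claim, mirroring the Kubernetes PV/PVC model.
func CanBind(p BindingPhase) bool {
	return p == BindingPhaseUnbound
}
```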

Quoting from #87 (comment):

Persistent volumes support a deprecated Recycle policy that attempts to clean up a previously claimed PV and let others claim it. I'm guessing this was deprecated because it's difficult to securely erase volumes with arbitrary underlying storage technologies, and because there's just not that much point recycling a volume when you could instead treat it as cattle; delete it and dynamically provision an identical new one. I suspect these are both as or more true for our managed resources.

I can see the point in not supporting Recycle for volumes, because a volume is a completely stateful entity and is probably useless without a cleanup. But this doesn't apply to all managed resources we might support. There are some stateless resources like networks, logical resource groups, or high-level pipeline services. To me, volumes sit at one end of the spectrum and logical groupings at the other. A database server is somewhere in between, since separate apps could use the same database server with different schemas. Another example: a user might want to provision a giant k8s cluster on top of reserved instances (less costly) and let it be reused by different teams. An example from outside the k8s world is Amazon EMR clusters, where you reserve instances for cost reasons but different people submit completely independent jobs. My point is that our model should be able to cover stateless cloud resources as well as stateful ones. As long as teams and/or apps are aware that the resource is recycled, it looks OK to me.

How could Crossplane help solve your problem?

Introduce a new reclaim policy option, Recycle, but do not go into the business of cleaning up the managed resource. Document that this only makes the managed resource available to be claimed again, and let the user decide whether they want that or not.
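
A minimal sketch of what that could look like, continuing the illustrative Go types above; ReclaimRecycle and OnClaimDeleted are hypothetical names, not an existing Crossplane API:

```go
// Continuing the illustrative sketch above (same package). ReclaimRecycle and
// OnClaimDeleted are hypothetical, invented for this proposal sketch only.
package sketch

// ReclaimPolicy mirrors the existing Retain/Delete options plus the proposed
// Recycle option.
type ReclaimPolicy string

const (
	ReclaimRetain  ReclaimPolicy = "Retain"
	ReclaimDelete  ReclaimPolicy = "Delete"
	ReclaimRecycle ReclaimPolicy = "Recycle" // proposed: make the resource claimable again
)

// OnClaimDeleted decides the binding phase of a managed resource after its
// claim is deleted. Recycle performs no cleanup of the external resource; it
// only returns the resource to the Unbound pool so another claim may bind it.
func OnClaimDeleted(policy ReclaimPolicy) BindingPhase {
	switch policy {
	case ReclaimRecycle:
		return BindingPhaseUnbound // reusable as-is; no scrubbing is attempted
	default: // Retain or Delete: resource is released (and possibly deleted externally)
		return BindingPhaseReleased
	}
}
```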

muvaf added the enhancement label Nov 20, 2019
muvaf (Member, Author) commented Nov 20, 2019

cc @prasek @negz @hasheddan

muvaf (Member, Author) commented Feb 4, 2020

See crossplane-contrib/provider-gcp#157. I think keeping the database while apps come and go is a valid use case, and it's something reclaimPolicy: Recycle could enable.

negz (Member) commented Feb 12, 2020

I still feel strongly that this is not a wise path. I posted a related comment over on crossplane-contrib/provider-gcp#157 (comment).

I do think we should:

  • Ensure resource claims provide clear feedback that binding to a released resource is not allowed (see the sketch after this list).
  • Do what we can to streamline the process of manually recycling a resource, i.e. deleting it from Crossplane, optionally cleaning it up, and reattaching it.
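
For the first bullet, here is a hedged sketch of what that clear feedback could look like, reusing the illustrative types from the earlier sketches; the function name and error message are invented for illustration only:

```go
// Hypothetical sketch of surfacing a clear error when a claim selects a
// Released managed resource; continues the illustrative sketch package above.
package sketch

import "errors"

// ErrBindReleased is the kind of message a claim could surface as a status
// condition or event instead of silently remaining unbound.
var ErrBindReleased = errors.New(
	"cannot bind to a Released managed resource; delete and recreate the " +
		"managed resource (or recycle it manually) to reuse it")

// ValidateBinding rejects binding attempts against Released resources with an
// explanatory error rather than leaving the claim pending without feedback.
func ValidateBinding(p BindingPhase) error {
	if p == BindingPhaseReleased {
		return ErrBindReleased
	}
	return nil
}
```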

stale bot commented Aug 13, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Aug 13, 2022