As of #87 this is not possible anymore. For the full context it might help to look at the discussions on that PR.
What problem are you facing?
Currently, we don't support rescheduling a Released managed resource to a new claim. This is in line with the Kubernetes PVC/PV model; however, not all managed resources are as stateful as volumes, so users might want to re-use the same managed resource.
Persistent volumes support a deprecated Recycle policy that attempts to clean up a previously claimed PV and let others claim it. I'm guessing this was deprecated because it's difficult to securely erase volumes with arbitrary underlying storage technologies, and because there's just not that much point recycling a volume when you could instead treat it as cattle; delete it and dynamically provision an identical new one. I suspect these are both as or more true for our managed resources.
I can see the point in not supporting Recycle for volumes because a volume is a completely stateful entity and is probably useless without a cleanup. But this doesn't apply to all managed resources we might support. There are some stateless resources like networks, logical resource groups, or high-level pipeline services. To me, volumes sit on one end of the spectrum and logical groupings on the other. A database server is somewhat in between, since separate apps could use the same database server with different schemas. Another example: a user might want to provision a giant Kubernetes cluster on top of reserved instances (less costly) and let it be reused by different teams. An example from outside the Kubernetes world could be Amazon EMR clusters, where you reserve instances for cost reasons but different people submit completely independent jobs. My point is that our model should be able to cover stateless cloud resources as well as stateful ones. As long as teams and/or apps are aware that the resource is recycled, it looks OK to me.
How could Crossplane help solve your problem?
Introduce a new reclaim policy option, Recycle, but do not go into the business of cleaning up the managed resource. Document that this only makes the managed resource available to reclaim, and let the user decide whether they want that or not.
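A minimal sketch of what this could look like on a managed resource, assuming the existing `reclaimPolicy` field simply gains a `Recycle` value (the kind, group/version, and field placement below are illustrative, not a confirmed API):

```yaml
# Hypothetical manifest: a managed resource whose reclaimPolicy is set to
# the proposed Recycle value. When its claim is deleted, the resource
# would return to a claimable state instead of becoming Released.
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: shared-db
spec:
  reclaimPolicy: Recycle   # proposed value; Retain and Delete exist today
  forProvider:
    databaseVersion: POSTGRES_11
    region: us-central1
```

Per the proposal, Crossplane would perform no cleanup here; any erasure of schemas or data before the next claim binds would be the user's responsibility.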
See crossplane-contrib/provider-gcp#157. I think it's a valid use case to keep the database while apps come and go, which is something reclaimPolicy: Recycle can enable.
Ensure resource claims provide clear feedback that binding to a released resource is not allowed.
Do what we can to streamline the process of manually recycling a resource, i.e. deleting it from Crossplane, optionally cleaning it up, and reattaching it.