
update RBAC rules in docs #451

Merged
merged 5 commits into from
Feb 27, 2018

Conversation

zparnold
Contributor

This PR addresses #233. I have updated the example YAMLs to have the appropriate RBAC permissions for K8s clusters running with RBAC (now enabled by default on new clusters).

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 29, 2018
@zparnold
Contributor Author

I have signed the CLA

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jan 29, 2018
@szuecs
Contributor
szuecs commented Feb 1, 2018

@zparnold Thanks for the PR!
I think it would make sense to split the "simple deployment" and the "RBAC deployment", do you agree?

@zparnold
Contributor Author
zparnold commented Feb 1, 2018

@szuecs Totally! My one question would be: since RBAC is now the default for most cluster spin-up strategies, is an RBAC-less deployment even going to be relevant in the future? I can obviously split it, but I felt that (even though it does add a bit of clutter) if someone starts with RBAC off and turns it on (like we did 😄 ), external DNS would still work for them. Does that make sense?

@szuecs
Contributor
szuecs commented Feb 2, 2018

@zparnold I think it makes sense in any case to keep both differentiated, because new users might only be testing and have no use for RBAC; we, for example, use webhook as the authnz method and do not use Kubernetes RBAC.

@zparnold
Contributor Author
zparnold commented Feb 2, 2018

@szuecs Sounds good! I'll make that adjustment. 😄

@ndarilek
ndarilek commented Feb 4, 2018

Disclaimer: new at this, so I'm probably doing something wrong.

I read the Cloudflare tutorial docs in this PR, and had to use:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default

Otherwise I get an error that the namespace field is required. Not sure if it has something to do with installing external-dns itself into a namespace.

But, when the pod launches, I get this in its logs:

time="2018-02-04T17:16:27Z" level=error msg="services is forbidden: User "system:serviceaccount:external-dns:default" cannot list services at the cluster scope" 

Again, not sure if that's a namespace issue or not, but I wanted to note my experiences as someone new who is attempting to use the docs in this PR. In general I don't know enough to know whether adding namespaces will break a thing or not, but I try to do it by default, since it seems like putting things in namespaces will make cleaning up my newbie mistakes easier, and is a best practice besides. :)

Thanks!

@szuecs
Contributor
szuecs commented Feb 5, 2018

@zparnold Do you want to reply to @ndarilek ?
I think you might understand this better than me. :)
I am fine with this PR

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 7, 2018
@zparnold
Contributor Author
zparnold commented Feb 7, 2018

Aha! Silly me! There is a mistake, @ndarilek. Applying that series of Kubernetes YAML manifests creates a service account for the deployment to use, but I never updated the deployment manifest to actually use that service account. (Hence your error.) 🤦‍♂️

Any-hoo, I updated the file so if you try now it should work. I put external-dns in my own kube-system namespace and adjusted the default service account to have these permissions, hence why I didn't remember. Apologies. Let me know if that works for you!

@szuecs
Contributor
szuecs commented Feb 7, 2018

@zparnold thanks!
@linki lgtm please merge

@ndarilek
ndarilek commented Feb 7, 2018

Still having almost the same problem, except I now monitor ingresses, and my error message now reads:

time="2018-02-07T09:44:23Z" level=error msg="ingresses.extensions is forbidden: User "system:serviceaccount:external-dns:external-dns" cannot list ingresses.extensions at the cluster scope" 

Got the same error when using services, so I doubt the ingress change matters. Here's my new config:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"] 
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:v0.4.8
        args:
        - --source=ingress # ingress is also possible
        - --provider=cloudflare
        - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDOS protection, CDN...)
        env:
        - name: CF_API_KEY
          value: "NO"
        - name: CF_API_EMAIL
          value: "me@example.com"

Thanks. Using:

kubectl apply -f external-dns.yml -n external-dns

@szuecs
Contributor
szuecs commented Feb 7, 2018

@ndarilek does the YAML you posted differ from the manifests provided in this PR?
Can you add your cluster version, please?
It might also be helpful to dig into the RBAC policy for ingress. I don't really use RBAC, but I guess it should be easy to change and test whether these values make sense:

- apiGroups: ["extensions"] 
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]

I've already seen with PSPs that this can be quite complex; you have to dig into the authorization logs and test to find out what works and what doesn't.
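The "change and test" suggestion above can be sketched without a cluster. The following is a hypothetical simplification of how an RBAC rule list grants a verb on a resource (it ignores wildcards and resourceNames, unlike the real Kubernetes authorizer):

```python
# Hypothetical helper, NOT the real Kubernetes authorizer: checks whether
# a list of PolicyRules grants a given verb on a resource in an apiGroup.
def rules_allow(rules, api_group, resource, verb):
    for rule in rules:
        if (api_group in rule["apiGroups"]
                and resource in rule["resources"]
                and verb in rule["verbs"]):
            return True
    return False

# The rules from the ClusterRole discussed in this thread:
rules = [
    {"apiGroups": [""], "resources": ["services"], "verbs": ["get", "watch", "list"]},
    {"apiGroups": ["extensions"], "resources": ["ingresses"], "verbs": ["get", "watch", "list"]},
]

print(rules_allow(rules, "extensions", "ingresses", "list"))   # True
print(rules_allow(rules, "extensions", "ingresses", "delete")) # False
```

By this reading the rules themselves do grant `list` on `ingresses.extensions`, which points the investigation at the binding rather than the role.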

@ndarilek
ndarilek commented Feb 7, 2018 via email

@szuecs
Contributor
szuecs commented Feb 7, 2018

@ndarilek https://kubernetes.io/docs/imported/release/notes/#network can you check whether the apiGroup "extensions" is the right one to list ingresses in 1.9?
They did not mention a change, and the log you showed suggests the serviceaccount got the role and it should work, but I am also not familiar enough with RBAC to see the problem.
Maybe you can add more logs with more context, showing the rolebinding and the other requests made just before the failing one.

@ndarilek
ndarilek commented Feb 7, 2018 via email

@linki linki self-requested a review February 7, 2018 12:54
@szuecs
Contributor
szuecs commented Feb 7, 2018

@ndarilek we had another look at your logs, and the important part is that your deployment and serviceaccount do not live in the same namespace. The first time, external-dns was deployed with the service account "default" in namespace "external-dns", but your ServiceAccount is in namespace "default" with name "external-dns". The second log shows you changed the deployment to use the name "external-dns" in namespace "external-dns", but the ServiceAccount stayed the same. A serviceaccount has to be in the same namespace as the deployment and has to be referenced by name in the deployment.

First log you showed:

User "system:serviceaccount:external-dns:default"

Second log you showed:

User "system:serviceaccount:external-dns:external-dns"
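The User strings above encode the pod's identity as system:serviceaccount:&lt;namespace&gt;:&lt;serviceaccount-name&gt;, which is what makes the mismatch visible. A quick sketch (hypothetical helper, not part of Kubernetes) to decode them:

```python
# Decode the User string from the RBAC error logs. The format is
# system:serviceaccount:<namespace>:<serviceaccount-name>.
def parse_serviceaccount_user(user):
    prefix = "system:serviceaccount:"
    if not user.startswith(prefix):
        raise ValueError("not a service-account user: %r" % user)
    namespace, name = user[len(prefix):].split(":", 1)
    return namespace, name

# First log: pod runs in namespace "external-dns" as SA "default"
print(parse_serviceaccount_user("system:serviceaccount:external-dns:default"))
# -> ('external-dns', 'default')

# Second log: same namespace, now as SA "external-dns"
print(parse_serviceaccount_user("system:serviceaccount:external-dns:external-dns"))
# -> ('external-dns', 'external-dns')
```

In both logs the pod's namespace is "external-dns", while the ClusterRoleBinding subject points at a ServiceAccount in namespace "default", so the binding never matches.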

@ndarilek
ndarilek commented Feb 7, 2018 via email

@szuecs
Contributor
szuecs commented Feb 7, 2018

@ndarilek From the logs we can already see that the deployment has changed. The problem is that the serviceaccount does not exist in the namespace "external-dns".
You specified the namespace "default" in the serviceaccount definition, which might override your "-n external-dns" parameter to kubectl.
If you checked that and think you did all things right, then please show us all the runtime objects involved with the output of:

kubectl get deployment,role,rolebinding,clusterrole,clusterrolebinding external-dns -o yaml

@ndarilek
ndarilek commented Feb 7, 2018

Check out the second batch of yaml I posted this morning:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns

When I run kubectl get sa --namespace external-dns, I see the default and external-dns service accounts. If I just run kubectl get sa, I only see default.

Sorry, I'm not trying to be difficult, but I don't understand what I'm doing wrong here, or why I wouldn't have a serviceaccount in the external-dns namespace based on the YAML I posted this morning. AFAICT, it is identical to what is in your docs now.

Thanks.

@zparnold
Contributor Author
zparnold commented Feb 7, 2018

To piggyback on what @szuecs said: if you wanted to modify the RBAC version to work in the namespace external-dns, you'd need to add the key:

namespace: external-dns

To the following:

  • The service account
  • The cluster role
  • The clusterrolebinding
  • The deployment

Have you verified that? As an example, to work in the external-dns namespace the ServiceAccount would look like the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns
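The ClusterRoleBinding subject would then need the same namespace. A sketch reusing the names from this thread:

```yaml
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: external-dns  # must match the namespace the Deployment runs in
```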

@ndarilek
ndarilek commented Feb 7, 2018 via email

@zparnold
Contributor Author
zparnold commented Feb 7, 2018

No worries! Glad it worked out. 😄 Sounds like we may have found an issue with CRBs that could be filed against the upstream Kubernetes issue tracker.

@ndarilek
ndarilek commented Feb 7, 2018 via email

@szuecs
Contributor
szuecs commented Feb 7, 2018

@ndarilek nice that you found the issue. In general I would always specify the namespace in the files and not change it via kubectl apply or create, because you want to keep configuration/deployment files in git and be able to replay them easily.

@zparnold thanks for making my request explicit. :)

@linki
Member
linki commented Feb 12, 2018

@zparnold I updated your docs accordingly: fb68267

For me, a subject-reference without specifying a namespace (even when I want default) is rejected by the API server with:

The ClusterRoleBinding "external-dns-viewer" is invalid: subjects[0].namespace: Required value

Can you confirm this?

@zparnold
Contributor Author
zparnold commented Feb 12, 2018 via email

@linki
Member
linki commented Feb 12, 2018

@zparnold tested against 1.8 and 1.9

@linki
Member
linki commented Feb 19, 2018

@zparnold I'm not sure why I only added the namespace to one of your examples 😫 Will fix that. Did you have a chance to confirm my issue when leaving it out?

@linki linki added the docs label Feb 21, 2018
Member
@linki linki left a comment

lgtm

@linki linki added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 26, 2018
@linki linki merged commit 3895277 into kubernetes-sigs:master Feb 27, 2018
@linki
Member
linki commented Feb 27, 2018

Thank you @zparnold

grimmy pushed a commit to grimmy/external-dns that referenced this pull request Apr 10, 2018
* update RBAC rules in docs

* update docs with split between rbac and non

* make deployment use new sa

* docs: correctly reference service account in CRB

* docs: correctly reference service account in CRB in other docs