update RBAC rules in docs #451
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |
I have signed the CLA |
@zparnold Thanks for the PR! |
@szuecs Totally! My one question would be, since RBAC is now the default for most cluster spin-up strategies, is it even going to be relevant to have an RBAC-less deployment in the future? I can obviously split it, but I felt that (even though it does add a bit of clutter) in general if someone starts with RBAC off and turns it on (like we did 😄 ) external DNS would still work for them. Does that make sense? |
@zparnold I think it makes sense in any case to keep both differentiated, because new users might only be testing and have no use for RBAC, or, for example, we use webhook as the authnz method and do not use Kubernetes RBAC. |
@szuecs Sounds good! I'll make that adjustment. 😄 |
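For context on the split being discussed, a minimal sketch of what the RBAC variant might look like (the v1beta1 apiVersion and the exact rule set are assumptions based on typical external-dns usage at the time, not taken verbatim from this PR's files):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
# external-dns watches services and ingresses to discover DNS names
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
```

The non-RBAC variant would simply omit these objects and the `serviceAccountName` reference in the deployment.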
Disclaimer: new at this, so I'm probably doing something wrong. I read the Cloudflare tutorial docs in this PR, and had to use:
Otherwise I get an error that the namespace field is required. Not sure if it has something to do with installing external-dns itself into a namespace. But, when the pod launches, I get this in its logs:
Again, not sure if that's a namespace issue or not, but I wanted to note my experiences as someone new who is attempting to use the docs in this PR. In general I don't know enough to know whether adding namespaces will break a thing or not, but I try to do it by default, since it seems like putting things in namespaces will make cleaning up my newbie mistakes easier, and is a best practice besides. :) Thanks! |
Aha! Silly me! There is a mistake @ndarilek. When applying that series of Kubernetes YAML manifests, a service account is created for the deployment to use, but I never updated the deployment manifest to actually use that service account (hence your error). 🤦♂️ Any-hoo, I updated the file, so if you try now it should work. I put external-dns in my own |
Still having almost the same problem, except I now monitor ingresses, and now my error message reads:
Got the same error when using services, so I doubt the ingress change matters. Here's my new config:
Thanks. Using:
|
@ndarilek does the yaml you posted have changes from the ones provided in this PR?
I already saw with PSPs that this can get quite complex, and you have to dig into the authorization logs and test to find out what works and what doesn't. |
I think it's identical, save for removing the domain suffix and
switching "service" to "ingress" as documented in one of the comments.
```
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2",
GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b",
GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z",
GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
```
|
@ndarilek https://kubernetes.io/docs/imported/release/notes/#network can you check if the apiGroup "extensions" is the right one to list ingresses in 1.9? |
I think that's correct for ingresses, I have one that looks like:
```
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
...
```
and works.
By logs, do you mean output of the kubectl command? Everything is
created, I don't see any errors/non-created resources. I can provide
other logs if needed, I'm just new at this and don't know how to get
anything other than pod logs.
Thanks.
|
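For completeness, the ClusterRole rule that would grant read access to those 1.9-era ingresses looks like the following (a sketch; in 1.9 Ingress still lived in the `extensions` API group, so that is the group the RBAC rule has to name):

```
rules:
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
```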
@ndarilek we had another look at your logs, and the important part is that your deployment and serviceaccount are not in the same namespace. The first log shows external-dns deployed with name "default" in namespace "external-dns", while the serviceaccount is in namespace "default" with name "external-dns". The second log shows that you changed the deployment to name "external-dns" and namespace "external-dns", but the serviceaccount stayed the same. A serviceaccount has to be in the same namespace as the deployment and has to be referenced by name in the deployment. First log you showed:
Second log you showed:
|
Right, I get that. The second log was after making my yaml look like my
second comment (i.e. adding:
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: external-dns
spec:
# Added this as per doc
serviceAccountName: external-dns
...
```
I'm deploying with:
`kubectl apply -f externaldns.yml -n external-dns`
The serviceaccount definitely exists in the external-dns namespace.
Thanks for your help.
|
@ndarilek From the logs we already see that the deployment has changed. The problem is that the serviceaccount does not exist in the namespace "external-dns".
|
Check out the second batch of yaml I posted this morning:
When I run Sorry, I'm not trying to be difficult, but I don't understand what I'm doing wrong here, or why I wouldn't have a serviceaccount in the external-dns namespace based on the yaml I posted this morning. AFAICT, it is identical to what is in your docs now. Thanks. |
To piggyback on what @szuecs said: have you verified that? As an example, if you wanted to modify the RBAC version to work in the namespace `external-dns`, the ServiceAccount would need to be the following:
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns
``` |
OK, figured it out. The issue was in the ClusterRoleBinding, which had:
```
namespace: default
```
Not sure why that has to be hard-coded and can't come from the -n option
to kubectl, but changing that seems to make things stop failing.
Thanks for all your patience!
|
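For anyone hitting the same error: the subject of a ClusterRoleBinding identifies a namespaced ServiceAccount, so the subject's namespace has to be spelled out even though the binding itself is cluster-scoped and `kubectl -n` does not apply to it. A sketch of the corrected binding (object names assumed from this thread):

```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  # must match the namespace the deployment's serviceaccount lives in
  namespace: external-dns
```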
No worries! Glad it worked out. 😄 Sounds like we may have found an issue with CRBs that could be put on the master Kubernetes issues list. |
Please do let me know if you do that. I'm hesitant to file issues like
that because I don't know how much sense they make. At the same time,
the fact that I have to hard-code a namespace in CRBs and not other
things, and the fact that sometimes I have to run `kubectl create ns
...` vs. just applying a resource with a namespace, has been a stumbling
block for me.
|
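On the `kubectl create ns` point above: a namespace can itself be declared as a manifest and applied like any other resource, which avoids the separate imperative step (a minimal sketch):

```
apiVersion: v1
kind: Namespace
metadata:
  name: external-dns
```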
What version of Kubernetes/kubectl are you running?
…On Mon, Feb 12, 2018, 5:11 PM Martin Linkhorst ***@***.***> wrote:
I updated your docs accordingly: fb68267
For me, a subject-reference without specifying a namespace (even when I
want default) is rejected by the API server with:
The ClusterRoleBinding "external-dns-viewer" is invalid: subjects[0].namespace: Required value
|
@zparnold tested against 1.8 and 1.9 |
@zparnold I'm not sure why I only added the |
lgtm
Thank you @zparnold |
* update RBAC rules in docs
* update docs with split between rbac and non
* make deployment use new sa
* docs: correctly reference service account in CRB
* docs: correctly reference service account in CRB in other docs
This PR addresses #233. I have updated the example YAML manifests to have the appropriate RBAC permissions for K8s clusters running with RBAC (now enabled by default on new clusters).