Performance regression since RELEASE.2024-03-21T23-13-43Z with OIDC #19945

Closed
olevitt opened this issue Jun 18, 2024 · 4 comments

@olevitt
olevitt commented Jun 18, 2024

Performance of bucket listing (either with mc ls on an alias or through the MinIO Console session & buckets requests) degrades dramatically as the number of buckets grows, even with a dead-simple policy, when authenticating with OpenID Connect.
There is no performance issue with the root credentials or with a service account, but the issue also occurs with a service account derived from an OIDC user.

This issue may be somewhat related to #19746 and is a regression introduced in RELEASE.2024-03-21T23-13-43Z

Steps to Reproduce (for bugs)

  1. Create a new tenant with a version >= RELEASE.2024-03-21T23-13-43Z
  2. Configure OIDC on it
  3. Create and apply the corresponding policy. It can be as simple as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket"
      ]
    }
  ]
}
  4. Log in to the console using OIDC or use mc ls with OIDC credentials to list buckets; note that it is fast
  5. Create buckets (even empty ones)
  6. Compare performance as the number of buckets grows

Do the same with a version older than RELEASE.2024-03-21T23-13-43Z to confirm there was no performance degradation before that release (a scripted version of these steps is sketched below).
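
A rough shell sketch of these reproduction steps (the alias names admin and oidc, the policy name, and the bucket count are placeholders; depending on the mc version, the policy command may be mc admin policy add rather than mc admin policy create):

# Create the policy from step 3; with the default claim-based OIDC mapping the
# policy name typically has to match the policy claim in the ID token.
mc admin policy create admin listbucket listbucket-policy.json

# Create a growing number of empty buckets using the root-credential alias.
for i in $(seq 1 500); do
  mc mb "admin/bucket-$i"
done

# Compare listing latency: root credentials vs. OIDC-derived STS credentials.
time mc ls admin   # stays fast
time mc ls oidc    # degrades as the bucket count grows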

@harshavardhana
Member

Please test with the latest release and also provide more details, such as the output of mc admin trace -a --response-duration 250ms alias/

@olevitt
Author
olevitt commented Jul 1, 2024

Hi!
Thanks for your reply.
I confirm that as of the latest release (RELEASE.2024-06-29T01-20-47Z) the issue is still very much present.
I tested it on an instance created exactly as described in the steps to reproduce.
This test instance is not used at all, has no traffic, and contains only empty buckets; the only activity is loading the MinIO Console using OIDC.
I was expecting some IAM / OIDC related entries in the traces but couldn't find any. Is there a way to get IAM-specific traces?
Here are the requested logs (mc admin trace -a --response-duration 250ms alias/): debug-minio.txt

EDIT: It seems there is a request to minio/admin/v3/accountinfo that takes about a minute:

2024-07-01T10:27:41.359 [200 OK] admin.AccountInfo minio-pool-0-3.minio-hl.minio-test.svc.cluster.local:9000/minio/admin/v3/accountinfo 192.168.253.171 52.011639s ⇣ 52.011629403s ↑ 259 B ↓ 19 KiB
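
To isolate that call on a subsequent trace run, mc admin trace's --path filter could presumably be used (assuming the filter also matches admin API request paths, which I have not verified):

mc admin trace -a --path "/minio/admin/v3/accountinfo" alias/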

@harshavardhana
Member

A simple Mkdir takes about a second on your backend drives, and yet this cluster isn't under any I/O load:

2024-07-01T10:28:57.157 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/ec575335-c692-4b0f-9c73-0b21e2829bc2 1.488306341s
2024-07-01T10:28:58.217 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/c253f2ad-0fba-4671-a6e6-fb315e7db0e0 428.641645ms

Along with that, I see that you have created thousands of buckets.

2024-07-01T10:28:27.414 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/c6552d1d-b1a1-4ddd-b54c-747df9b3ce27 439.087145ms
2024-07-01T10:28:27.415 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/9136a0a1-e1eb-4c54-9ffc-da94dbd957a9 439.111829ms
2024-07-01T10:28:27.418 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/.trash 435.625288ms
2024-07-01T10:28:28.411 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/3d046c87-d5e2-4206-bb94-a2c2cc2fe68c 1.386060402s
2024-07-01T10:28:28.413 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/6994d1c4-4f31-41b6-ba7f-ee0453745a0c 1.384432453s
2024-07-01T10:28:28.428 [OS] os.Mkdir minio-pool-0-2.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/.trash 1.369490772s
2024-07-01T10:28:28.413 [OS] os.Mkdir minio-pool-0-0.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/3d046c87-d5e2-4206-bb94-a2c2cc2fe68c 1.386941821s
2024-07-01T10:28:28.415 [OS] os.Mkdir minio-pool-0-0.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/6994d1c4-4f31-41b6-ba7f-ee0453745a0c 1.38498268s
2024-07-01T10:28:28.430 [OS] os.Mkdir minio-pool-0-0.minio-hl.minio-test.svc.cluster.local:9000 /export/.minio.sys/tmp/.trash 1.370495461s

These drives are pretty much unusable, IMHO. I hope you are using local drives for this; judging from the I/O performance, this looks like some kind of network-attached backend.
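
As a quick sanity check of raw backend latency, independent of MinIO, something along these lines can be run directly on one of the nodes against the export path (the path and sizes here are illustrative):

# Sequential write with direct I/O to bypass the page cache and expose raw drive throughput/latency.
dd if=/dev/zero of=/export/dd-test bs=1M count=256 oflag=direct
rm -f /export/dd-test

# Time a bare mkdir/rmdir, comparable to the os.Mkdir calls shown in the trace above.
time mkdir /export/mkdir-test
time rmdir /export/mkdir-test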

@harshavardhana
Member

I am closing this as an environmental issue, not a MinIO-specific issue. The drives are slow for any useful I/O.
