
FS gateway migration is not reliable #20018

Closed
Haarolean opened this issue Jun 30, 2024 · 4 comments

Comments

@Haarolean

Expected Behavior

The described migration should work.

Current Behavior

The described migration fails with multiple issues.

Steps to Reproduce (for bugs)

I have the following setup:

  1. An older deployment with minio/minio:RELEASE.2022-06-25T15-50-16Z image
  2. A newer deployment (prepared for migration) with minio/minio:RELEASE.2022-10-24T18-35-07Z
  3. I've been using the MinIO client minio/mc:RELEASE.2022-10-22T03-39-29Z (the closest release I could find to the server release dates); both deployments are aliased in mc as old and new, roughly as sketched below
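
For reference, this is roughly how the two deployments are wired up in mc before running the commands below; the endpoints and credentials here are placeholders, not the real values:

mc alias set old http://old-minio:9000 OLD_ACCESS_KEY OLD_SECRET_KEY
mc alias set new http://new-minio:9000 NEW_ACCESS_KEY NEW_SECRET_KEY
mc admin info old   # sanity check: both servers respond
mc admin info new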

Exporting basically anything (I tried server configs and bucket metadata) doesn't work.

Issue 1:

mc admin config export old > config.txt produces the following config (it should basically be the default, as I don't remember altering anything):

site name= region=
# cache drives= exclude= expiry=90 quota=80 after=0 watermark_low=70 watermark_high=80 range=on commit=writethrough
# compression enable=off extensions=.txt,.log,.csv,.json,.tar,.xml,.bin mime_types=text/*,application/json,application/xml allow_encryption=off
# etcd endpoints= path_prefix= coredns_path=/skydns client_cert= client_cert_key=
# identity_openid enable= display_name= config_url= client_id= client_secret= claim_name=policy claim_userinfo= role_policy= claim_prefix= redirect_uri= redirect_uri_dynamic=off scopes=
# identity_ldap server_addr= group_search_filter= group_search_base_dn= tls_skip_verify=off server_insecure=off server_starttls=off user_dn_search_base_dn= user_dn_search_filter= lookup_bind_dn= lookup_bind_password= lookup_bind_password= lookup_bind_password= lookup_bind_password= lookup_bind_password= lookup_bind_password=
# identity_tls skip_verify=off
# identity_plugin url= auth_token= role_policy= role_id=
# policy_plugin url= auth_token=
api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=strict replication_workers=100 replication_failed_workers=8 transition_workers=100 stale_uploads_cleanup_interval=6h stale_uploads_expiry=24h delete_cleanup_interval=5m disable_odirect=off gzip_objects=off gzip_objects=off
heal bitrotscan=off max_sleep=1s max_io=10
scanner delay=10 max_wait=15s cycle=1m
# logger_webhook enable=off endpoint= auth_token= client_cert= client_key= queue_size=100000
# audit_webhook enable=off endpoint= auth_token= client_cert= client_key= queue_size=100000
# audit_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off version=
# notify_webhook enable=off endpoint= auth_token= queue_limit=0 queue_dir= client_cert= client_key=
# notify_amqp enable=off url= exchange= exchange_type= routing_key= mandatory=off durable=off no_wait=off internal=off auto_deleted=off delivery_mode=0 queue_limit=0 queue_dir= publisher_confirms=off
# notify_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off queue_limit=0 queue_dir= version=
# notify_mqtt enable=off broker= topic= password= username= qos=0 keep_alive_interval=0s reconnect_interval=0s queue_dir= queue_limit=0
# notify_nats enable=off address= subject= username= password= token= tls=off tls_skip_verify=off cert_authority= client_cert= client_key= ping_interval=0 streaming=off streaming_async=off streaming_max_pub_acks_in_flight=0 streaming_cluster_id= queue_dir= queue_limit=0
# notify_nsq enable=off nsqd_address= topic= tls=off tls_skip_verify=off queue_dir= queue_limit=0
# notify_mysql enable=off format=namespace dsn_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_postgres enable=off format=namespace connection_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_elasticsearch enable=off url= format=namespace index= queue_dir= queue_limit=0 username= password=
# notify_redis enable=off format=namespace address= key= password= queue_dir= queue_limit=0
subnet license= api_key= proxy=
# callhome enable=off frequency=24h

Importing this config, however, doesn't work:

sh-4.4# mc admin config import new < config.txt
mc: <ERROR> Unable to set server config: sub-system 'heal' cannot have empty keys.

Commenting out 'heal' doesn't work either:

$ mc admin config import new < config.txt
mc: <ERROR> Unable to set server config: invalid value for list strict quorum.

Removing list_quorum=strict from the config doesn't help:

mc: <ERROR> Unable to set server config: time: unknown unit "h replication_workers=" in duration "2h replication_workers=100 replication_failed_workers=8".

This is where I gave up on this step and moved on to the rest.
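
A possible alternative I didn't get to try, which might sidestep the import parser entirely, is to set only the non-default subsystems one at a time with mc admin config set; the values below are simply the ones from the export above, so treat this as a sketch rather than anything verified on this setup:

mc admin config set new heal bitrotscan=off max_sleep=1s max_io=10
mc admin config set new scanner delay=10 max_wait=15s cycle=1m
mc admin service restart new   # restart so the new settings take effect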

Issue 2:

The next order of business was to try exporting bucket metadata:

sh-4.4# mc admin cluster bucket export old
mc: <ERROR> Unable to export bucket metadata. Failed to parse server response (unexpected end of JSON input):.

I don't have the foggiest idea where to dig from here.
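
The fallback I'm considering is to skip the metadata export entirely and just mirror the bucket contents object by object (the bucket name below is a placeholder; this would lose bucket policies, lifecycle rules, and other bucket-level metadata):

mc mb new/mybucket                    # create the target bucket first
mc mirror old/mybucket new/mybucket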

Context

How has this issue affected you?

I spent hours trying to migrate this old deployment with no success.

Regression

Your Environment

All the relevant versions are mentioned above.

Contributor
jiuker commented Jun 30, 2024

Please upgrade your setup

@jiuker jiuker closed this as completed Jun 30, 2024
@Haarolean
Author

Please upgrade your setup

@jiuker that's exactly what I am trying to do?

@harshavardhana
Member

Please upgrade your setup

@jiuker that's exactly what I am trying to do?

Migration requires that you move to a different setup first.

https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-single-node-single-drive.html#minio-snsd
Read the documentation on this

@Haarolean
Author

@harshavardhana that's exactly what I've been doing: running two containers, one for the old setup and one for the new setup (with a completely different volume bind for the new one).
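
Roughly, the two containers look like this (container names, ports, and volume paths are placeholders, not my real values):

docker run -d --name old-minio -p 9000:9000 \
  -v /mnt/old-data:/data \
  minio/minio:RELEASE.2022-06-25T15-50-16Z server /data

docker run -d --name new-minio -p 9002:9000 \
  -v /mnt/new-data:/data \
  minio/minio:RELEASE.2022-10-24T18-35-07Z server /data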
