
migration state: add source pod and failure reason #11330

Merged: 2 commits merged into kubevirt:main on Feb 26, 2024

Conversation

jean-edouard (Contributor):

What this PR does

Before this PR: the migration state carries no source pod and no failure reason.

After this PR: the migration state records the source pod name and, on failure, a failure reason.


Release note

More information in the migration state of VMI / migration objects
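(Editor's illustration, not part of the PR: a minimal sketch of how the two new fields could be read from a VMI's status. Only the SourcePod and FailureReason field names come from this PR; the import path, example values, and surrounding program are assumptions for illustration.)

package main

import (
	"fmt"

	v1 "kubevirt.io/api/core/v1"
)

func main() {
	// Illustrative object only; in a real cluster this status is populated by
	// virt-controller and virt-handler while a live migration runs.
	vmi := &v1.VirtualMachineInstance{
		Status: v1.VirtualMachineInstanceStatus{
			MigrationState: &v1.VirtualMachineInstanceMigrationState{
				SourcePod:     "virt-launcher-example-abcde", // new field added by this PR
				FailureReason: "Target pod is down",          // new field added by this PR
			},
		},
	}

	if ms := vmi.Status.MigrationState; ms != nil {
		fmt.Println("source pod:", ms.SourcePod)
		if ms.FailureReason != "" {
			fmt.Println("failure reason:", ms.FailureReason)
		}
	}
}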

@kubevirt-bot added the release-note and dco-signoff: yes labels on Feb 21, 2024
@kubevirt-bot added the kind/api-change and size/M labels on Feb 21, 2024
jean-edouard (Contributor Author):

/cc @vladikr

jean-edouard (Contributor Author):

/hold
I broke unit tests

@kubevirt-bot added the do-not-merge/hold label on Feb 21, 2024
jean-edouard (Contributor Author):

/hold cancel

@kubevirt-bot added the size/L label and removed the do-not-merge/hold and size/M labels on Feb 22, 2024
@@ -837,7 +837,8 @@ func (c *MigrationController) handleMarkMigrationFailedOnVMI(migration *virtv1.V
 		return err
 	}
 	log.Log.Object(vmi).Infof("Marked Migration %s/%s failed on vmi due to target pod disappearing before migration kicked off.", migration.Namespace, migration.Name)
-	c.recorder.Event(vmi, k8sv1.EventTypeWarning, FailedMigrationReason, fmt.Sprintf("VirtualMachineInstance migration uid %s failed. reason: target pod is down", string(migration.UID)))
+	vmiCopy.Status.MigrationState.FailureReason = "Target pod is down"
+	c.recorder.Event(vmi, k8sv1.EventTypeWarning, FailedMigrationReason, fmt.Sprintf("VirtualMachineInstance migration uid %s failed. reason: %s", string(migration.UID), vmiCopy.Status.MigrationState.FailureReason))
Member:

Is there a way that we will already have a Failure Reason from virt-handler at this point?

jean-edouard (Contributor Author):

Yeah that's possible. What do you suggest?

Member:

Wouldn't creating the event for the migration object solve this?

Member:

@Barakmor1 I think these are two separate records. We need to store this info in the status for convenience. That way everything related to this migration is stored in one place.

Member:

> Yeah that's possible. What do you suggest?

@jean-edouard I think we can merge. We could simply check whether MigrationState.FailureReason already has something and add this message. WDYT?
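(Editor's note: one possible reading of that suggestion, as a minimal sketch reusing the identifiers from the hunk above; this is not the code that was merged.)

	// Only fall back to the generic reason when virt-handler has not already
	// recorded a more specific failure reason on the VMI.
	if vmiCopy.Status.MigrationState.FailureReason == "" {
		vmiCopy.Status.MigrationState.FailureReason = "Target pod is down"
	}
	c.recorder.Event(vmi, k8sv1.EventTypeWarning, FailedMigrationReason,
		fmt.Sprintf("VirtualMachineInstance migration uid %s failed. reason: %s",
			string(migration.UID), vmiCopy.Status.MigrationState.FailureReason))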

 	migration := libmigration.New(vmi.Name, vmi.Namespace)
 	migration = libmigration.RunMigrationAndExpectToCompleteWithDefaultTimeout(virtClient, migration)
 	By("Checking VMI, confirm migration state")
-	libmigration.ConfirmVMIPostMigration(virtClient, vmi, migration)
+	vmi = libmigration.ConfirmVMIPostMigration(virtClient, vmi, migration)
+	Expect(vmi.Status.MigrationState.SourcePod).To(Equal(sourcePod.Name))
Barakmor1 (Member), Feb 26, 2024:

nit:
ExpectWithOffset would be better in that case

jean-edouard (Contributor Author):

The rest of the function doesn't use WithOffset, and I would argue that's a good thing.
WithOffset effectively obfuscates which line of the function tripped, and this is a massive function called once by two different tests.
The code for the first test that uses the function is two lines long, and the second one has just one line: the call to this function!
So, in this case, moving the failure up one level effectively makes the line number meaningless.
WDYT?

Barakmor1 (Member), Feb 26, 2024:

Sounds right :)
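(Editor's illustration of the point under discussion: with a plain Gomega Expect, a failure is reported at the line inside the helper, while ExpectWithOffset(1, ...) attributes it to the helper's caller. The helper name and signature below are hypothetical; the usual gomega dot-imports are assumed.)

func expectSourcePodRecorded(vmi *v1.VirtualMachineInstance, sourcePodName string) {
	// Reported at this line on failure:
	Expect(vmi.Status.MigrationState.SourcePod).To(Equal(sourcePodName))
	// Reported at the caller's line on failure:
	ExpectWithOffset(1, vmi.Status.MigrationState.SourcePod).To(Equal(sourcePodName))
}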

@@ -346,6 +346,21 @@ func SetVMIMigrationPhaseTransitionTimestamp(oldVMIMigration *v1.VirtualMachineI
 	}
 }
 
+func SetSourcePod(migration *v1.VirtualMachineInstanceMigration, vmi *v1.VirtualMachineInstance, podInformer cache.SharedIndexInformer) {
+	if migration.Status.Phase == v1.MigrationPending {
Member:

nit:

if migration.Status.Phase != v1.MigrationPending {
  return
}
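(Editor's illustration: a hypothetical rendering of the function with that early-return guard applied. Only the signature and the first line of the body appear in the hunk above, so the rest of the body is elided.)

func SetSourcePod(migration *v1.VirtualMachineInstanceMigration, vmi *v1.VirtualMachineInstance, podInformer cache.SharedIndexInformer) {
	if migration.Status.Phase != v1.MigrationPending {
		return
	}
	// ... look up the VMI's current virt-launcher pod via podInformer and
	// record its name as the migration's source pod (implementation elided) ...
}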

Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
Barakmor1 (Member):

/lgtm

@kubevirt-bot added the lgtm label on Feb 26, 2024
kubevirt-bot (Contributor), Feb 26, 2024:

@jean-edouard: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubevirt-check-tests-for-flakes
Commit: 563d33d
Required: false
Rerun command: /test pull-kubevirt-check-tests-for-flakes


vladikr (Member), Feb 26, 2024:

/approve

Looks good to me.

vladikr (Member), Feb 26, 2024:

Thanks @jean-edouard !

kubevirt-bot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vladikr


@kubevirt-bot added the approved label on Feb 26, 2024
kubevirt-commenter-bot:

Required labels detected, running phase 2 presubmits:
/test pull-kubevirt-e2e-windows2016
/test pull-kubevirt-e2e-kind-1.27-vgpu
/test pull-kubevirt-e2e-kind-sriov
/test pull-kubevirt-e2e-k8s-1.29-ipv6-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-network
/test pull-kubevirt-e2e-k8s-1.27-sig-storage
/test pull-kubevirt-e2e-k8s-1.27-sig-compute
/test pull-kubevirt-e2e-k8s-1.27-sig-operator
/test pull-kubevirt-e2e-k8s-1.28-sig-network
/test pull-kubevirt-e2e-k8s-1.28-sig-storage
/test pull-kubevirt-e2e-k8s-1.28-sig-compute
/test pull-kubevirt-e2e-k8s-1.28-sig-operator

kubevirt-bot merged commit 22aa13d into kubevirt:main on Feb 26, 2024 (36 of 37 checks passed).
vladikr (Member), Feb 27, 2024:

@jean-edouard should we backport this?

jean-edouard (Contributor Author):

/cherry-pick release-1.2

kubevirt-bot (Contributor):

@jean-edouard: new pull request created: #11371

In response to this:

> /cherry-pick release-1.2

