

ENH Consistent loss name for absolute error #19733

Merged
merged 13 commits from lorentzenchr:consistent_absolute_error into scikit-learn:main on May 10, 2021

Conversation

lorentzenchr (Member)

Reference Issues/PRs

Partially solves #18248.

What does this implement/fix? Explain your changes.

This PR renames all variations of absolute error to "absolute_error".
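A minimal sketch of the deprecation pattern behind such a rename (illustrative only: the helper name, the old alias handled, and the warning text are assumptions, not this PR's actual code):

# Illustrative sketch: keep an old spelling working as a deprecated alias
# while the new name becomes canonical. Names and messages are assumptions.
import warnings

def _resolve_loss_name(loss):
    # Hypothetical helper: map a deprecated spelling to the new canonical name.
    if loss == "lad":
        warnings.warn(
            "The loss 'lad' was deprecated and will be removed in a future "
            "release; use loss='absolute_error', which is equivalent.",
            FutureWarning,
        )
        loss = "absolute_error"
    return loss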

lorentzenchr added this to the 1.0 milestone on Mar 20, 2021
thomasjpfan (Member) left a comment

Thank you for continuing this work @lorentzenchr !

Mostly looks good. I think we still need to update BaseEnsemble._make_estimator similar to what we did here:

if (
    isinstance(estimator, (DecisionTreeRegressor, ExtraTreeRegressor))
    and getattr(estimator, "criterion", None) == "mse"
):
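For illustration, the analogous check for this PR's rename could look roughly like this (a sketch only; estimator stands for the sub-estimator being configured in BaseEnsemble._make_estimator, and the warning text is an assumption):

# Hypothetical sketch mirroring the "mse" handling above for the "mae"
# criterion renamed by this PR; not copied from the repository.
import warnings
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor

if (
    isinstance(estimator, (DecisionTreeRegressor, ExtraTreeRegressor))
    and getattr(estimator, "criterion", None) == "mae"
):
    warnings.warn(
        "criterion='mae' was deprecated and will be removed in a future "
        "release; use criterion='absolute_error' instead.",
        FutureWarning,
    )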

Outdated review threads, all resolved: sklearn/ensemble/tests/test_forest.py (two threads), sklearn/ensemble/tests/test_gradient_boosting.py, and sklearn/linear_model/tests/test_ransac.py.
@@ -194,26 +194,26 @@ def test_should_stop(scores, n_iter_no_change, tol, stopping):
     assert gbdt._should_stop(scores) == stopping


-def test_least_absolute_deviation():
+def test_absolute_error():
Member

For deprecations, we normally keep the original code and then configure the test to ignore the FutureWarnings. This is to make sure that we still support the original behavior during the deprecation period.

# TODO: Remove 'least_absolute_deviation' in 1.2, when the deprecated name is removed
@pytest.mark.filterwarnings("ignore::FutureWarning")
@pytest.mark.parametrize("loss", ["least_absolute_deviation", "absolute_error"])
def test_absolute_error(loss):

For this specific case, there is an argument for just replacing the name, since it is a simple name change and adding another loss that does exactly the same thing would increase the runtime of tests.

I am +0.25 on replacing the name directly in the tests, as you have already done in this PR.

Member Author

Good to know for future deprecations, although it's not my intention to become a deprecation master :smirk:

Equivalence of the new and old models' predict is always tested in test_loss_deprecated. So what is your rounding policy (and where is the 0.75 going)?

Thanks for your support on this little project!
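For reference, an equivalence check in the spirit of test_loss_deprecated could look roughly like this (the estimator, data, and old loss name below are assumptions for illustration):

# Hypothetical sketch of an old-vs-new loss equivalence check; the actual
# test_loss_deprecated may differ in fixtures and scope.
import numpy as np
import pytest
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

@pytest.mark.filterwarnings("ignore::FutureWarning")
def test_loss_deprecated_equivalence():
    X, y = make_regression(n_samples=50, n_features=4, random_state=0)
    old = GradientBoostingRegressor(loss="lad", random_state=0).fit(X, y)
    new = GradientBoostingRegressor(loss="absolute_error", random_state=0).fit(X, y)
    # Both spellings should produce identical predictions.
    np.testing.assert_allclose(old.predict(X), new.predict(X))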

Member

So what is your rounding policy (and where is the 0.75 going)?

It's more of an "absence of 0.75". Thinking about this again, I am now at +0.75 🤣.

thomasjpfan (Member) left a comment

Looking forward to consistent names in 1.2!

rth (Member) left a comment

Thanks @lorentzenchr! LGTM, the merge conflict would need resolving.

lorentzenchr (Member Author)

the merge conflict would need resolving.

@rth Done.

lorentzenchr (Member Author)

The single CI failure is because the test took too long. I think it's safe to merge. Let me know if you prefer me to trigger CI again.

lorentzenchr (Member Author)

@rth @thomasjpfan This has 2 approvals and CI is all green. Could someone merge?

rth (Member) commented on May 10, 2021

Thanks @lorentzenchr!

rth merged commit 2bd3a4d into scikit-learn:main on May 10, 2021
lorentzenchr deleted the consistent_absolute_error branch on May 10, 2021 at 20:11
eddiebergman added a commit to automl/auto-sklearn that referenced this pull request on Nov 15, 2022