[MRG+1] Issue#7998 : Consistent parameters between QDA and LDA #8130
Conversation
I think this one is looking okay. Please just be sure to close what is not current. Then if you can avoid opening new PRs, that would be great.
Thank you, I will close them. Sorry for the inconvenience.
Also, in general you can set the PR to any commit / branch / history you like. In your PR branch, just use:
git reset --hard some_commit_ref
then
git push --force # add whatever args you need to push to your fork
Okay, learned a lot from this process, thank you for your help. And to my understanding #8130 and #8134 are both okay, so I can close one of them? Then make the future change in the remaining one?
yes
Okay, just close #8134
sklearn/discriminant_analysis.py
Outdated
@@ -130,6 +130,7 @@ def _class_cov(X, y, priors=None, shrinkage=None):

class LinearDiscriminantAnalysis(BaseEstimator, LinearClassifierMixin,
                                 TransformerMixin):
please don't insert new lines into untouched parts of the code
sklearn/discriminant_analysis.py
Outdated
@@ -289,6 +291,9 @@ def _solve_lsqr(self, X, y, shrinkage):
        0-471-05669-3.
        """
        self.means_ = _class_means(X, y)
        if not self.store_covariance:
            warnings.warn("'covariance_' will be stored only if "
                          "'store_covariance' is True", DeprecationWarning)
add "from version 0.21"
sklearn/discriminant_analysis.py
Outdated
@@ -327,6 +332,9 @@ class scatter). This solver supports both classification and
        0-471-05669-3.
        """
        self.means_ = _class_means(X, y)
        if not self.store_covariance:
            warnings.warn("'covariance_' will be stored only if "
                          "'store_covariance' is True", DeprecationWarning)
add "from version 0.21"
sklearn/discriminant_analysis.py
Outdated
@@ -534,6 +542,7 @@ def predict_log_proba(self, X):


class QuadraticDiscriminantAnalysis(BaseEstimator, ClassifierMixin):
please remove blank line
sklearn/discriminant_analysis.py
Outdated
        if self.store_covariances:
        if self.store_covariances is not None:
            warnings.warn("'store_covariances' was renamed to store_covariance"
                          " in version 0.19 and will be removed in 0.20.",
in 0.21
sklearn/discriminant_analysis.py
Outdated
            warnings.warn("'store_covariances' was renamed to store_covariance"
                          " in version 0.19 and will be removed in 0.20.",
                          DeprecationWarning)
            self.store_covariance = self.store_covariances
Better to use a local variable so that we don't modify the already set parameter.
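For illustration, a minimal sketch of that suggestion (the local name and its use later in fit() are assumptions, not the PR's final code):

# read the deprecated parameter into a local name instead of overwriting
# the public attribute that was set in __init__
store_covariance = self.store_covariance
if self.store_covariances is not None:
    warnings.warn("'store_covariances' was renamed to store_covariance"
                  " in version 0.19 and will be removed in 0.21.",
                  DeprecationWarning)
    store_covariance = self.store_covariances
if store_covariance:
    # compute and store the covariance matrices here
    pass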
def test_lda_store_covariance():
    # The default is to not set the covariances_ attribute
    clf = LinearDiscriminantAnalysis().fit(X, y)
    assert_true(not hasattr(clf, 'covariance_'))
perhaps assert_false!
@@ -223,6 +223,26 @@ def test_lda_scaling():
                    'using covariance: %s' % solver)


def test_lda_store_covariance():
    # The default is to not set the covariances_ attribute
    clf = LinearDiscriminantAnalysis().fit(X, y)
Please test with each solver available.
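As a rough sketch of that suggestion (assuming the module-level X, y fixtures already used in this test file):

# exercise every LDA solver, not just the default one
for solver in ('svd', 'lsqr', 'eigen'):
    clf = LinearDiscriminantAnalysis(solver=solver).fit(X, y)
    # per-solver assertions about the covariance_ attribute go here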
        np.array([[0.7, 0.45], [0.45, 0.7]])
    )

    assert_array_almost_equal(
        clf.covariances_[1],
        clf.covariance_[1],
        np.array([[0.33333333, -0.33333333], [-0.33333333, 0.66666667]])
    )
Please also test the deprecation using assert_warns
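For example, something along these lines (a sketch; assert_warns is the helper from sklearn.utils.testing):

from sklearn.utils.testing import assert_warns

# fitting with the deprecated keyword should emit a DeprecationWarning
clf = QuadraticDiscriminantAnalysis(store_covariances=True)
assert_warns(DeprecationWarning, clf.fit, X6, y6)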
@jnothman I just fixed these, please check them.
I don't mind whether or not you do it here, but it can be useful to test deprecated functionality separately from testing continuing behaviour. This makes it easier to remove the relevant tests when the deprecation concludes.
Thanks!
    # Test for solver 'lsqr' and 'eigen'
    for solver in ('lsqr', 'eigen'):
        clf = LinearDiscriminantAnalysis(solver=solver)
        assert_false(not hasattr(clf.fit(X, y), 'covariance_'))
this is assert_true(hasattr(...)), no?
sklearn/discriminant_analysis.py
Outdated
        *store_covariance* has been moved to main constructor.

        *store_covariances* has been moved to the main constructor as
        *store_covaraiance*
spelling
please also remove the blank line before.
sklearn/discriminant_analysis.py
Outdated
@@ -556,9 +563,15 @@ class QuadraticDiscriminantAnalysis(BaseEstimator, ClassifierMixin):
        Regularizes the covariance estimate as
        ``(1-reg_param)*Sigma + reg_param*np.eye(n_features)``

    store_covariance : boolean
        If True the covariance matrices are computed and stored in the
        `self.covariance_` attribute.
use double-backticks for code formatting and you can drop self
sklearn/discriminant_analysis.py
Outdated
@@ -289,6 +290,9 @@ def _solve_lsqr(self, X, y, shrinkage):
        0-471-05669-3.
        """
        self.means_ = _class_means(X, y)
        if not self.store_covariance:
            warnings.warn("from version 0.21 'covariance_' will be stored only"
I think this should only be raised when covariance_ is accessed and present by error, i.e. do it with a property.
Hi @jnothman, thanks for your review. But I'm not sure what this means here. Can you give me more detail or examples?
Thank you!
I mean you will need something like:
@property
def covariance_(self):
    if hasattr(self, '_covariance_') and self.store_covariance == False:
        warnings.warn ...
    return self._covariance_
It's a bit ugly, but still.
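Filled out a little, the same idea might look like this (the warning text and the _covariance_ name are illustrative, following the sketch above):

@property
def covariance_(self):
    # warn only when the attribute is read even though the user did not ask
    # for it to be stored; fit() assigns to self._covariance_ directly
    if hasattr(self, '_covariance_') and not self.store_covariance:
        warnings.warn("from version 0.21, covariance_ will be stored only "
                      "if store_covariance is True", DeprecationWarning)
    return self._covariance_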
    assert_false(not hasattr(clf.fit(X, y), 'covariance_'))

    # Test the deprecation
    assert_warns(DeprecationWarning, clf.fit, X, y)
Would be better to use assert_warns_message to clarify which deprecation this is.
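i.e. roughly (the message prefix is illustrative; assert_warns_message checks that the given text appears in the warning message):

from sklearn.utils.testing import assert_warns_message

assert_warns_message(DeprecationWarning,
                     "'store_covariances' was renamed to store_covariance",
                     clf.fit, X, y)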
    # Test the actual attribute:
    clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
Ideally, we should check that store_covariances still works but raises a deprecation warning.
Hi @jnothman, please check if I'm in the right direction.
Thanks!
sklearn/discriminant_analysis.py
Outdated
@@ -188,7 +188,7 @@ class LinearDiscriminantAnalysis(BaseEstimator, LinearClassifierMixin,
    intercept_ : array, shape (n_features,)
        Intercept term.

    covariance_ : array-like, shape (n_features, n_features)
    _covariance_ : array-like, shape (n_features, n_features)
the public attribute is still covariance_
sklearn/discriminant_analysis.py
Outdated
    @property
    def covariance_(self):
        if hasattr(self, '_covariance_') and not self.store_covariance:
            warnings.warn("from version 0.21 '_covariance_' will be stored "
_covariance_ -> covariance_
sklearn/discriminant_analysis.py
Outdated
    @covariance_.setter
    def covariance_(self, value):
        if hasattr(self, '_covariance_') and not self.store_covariance:
            warnings.warn("from version 0.21 '_covariance_' will be stored "
_covariance_ -> covariance_
sklearn/discriminant_analysis.py
Outdated
@@ -599,7 +616,7 @@ class QuadraticDiscriminantAnalysis(BaseEstimator, ClassifierMixin):
    >>> clf.fit(X, y)
    ... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
    QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0,
                  store_covariances=False, tol=0.0001)
                  store_covariance=False, store_covariances=None, tol=0.0001)
PEP8 visual indentation would have this line aligned with the opening parenthesis
sklearn/discriminant_analysis.py
Outdated
    @property
    @deprecated("Attribute covariances_ was deprecated in version"
                " 0.19 and will be removed in 0.21. Use "
                "covariance_' instead")
remove '
    )

    assert_array_almost_equal(
        clf._covariance_[1],
_covariance_ -> covariance_
    # Test for solver svd, the default is to not set the covariances_ attribute
    clf = LinearDiscriminantAnalysis(solver='svd').fit(X, y)
    assert_true(not hasattr(clf, '_covariance_'))
_covariance_ -> covariance_
assert_false(hasattr(...))
    # Test the actual attribute:
    clf = LinearDiscriminantAnalysis(solver=solver,
                                     store_covariance=True).fit(X6, y6)
    assert_true(hasattr(clf, '_covariance_'))
_covariance_ -> covariance_
    assert_warns_message(DeprecationWarning, "from version 0.21 "
                         "'_covariance_' will be stored only if "
                         "'store_covariance' is True",
                         clf.fit, X, y)
I think this warning should only trigger on getattr(clf.fit(X, y), 'covariance_')
Thanks for your review! I'm not sure if this means I need the deprecation in __getattr__ instead of the current property? To my understanding we need covariance_ as a "private value", but I'm not sure which way to achieve this?
No, the covariance_ attribute is public. It may present a public face to a __dict__ entry, _covariance_, and do so by being implemented as a descriptor with a getter and setter. The getter is called whenever hasattr(obj, 'covariance_') or getattr(obj, 'covariance_') is called on an instance of LinearDiscriminantAnalysis. If the getter raises AttributeError, hasattr returns False; otherwise True. getattr returns the result of the getter and propagates the exception.
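A small standalone illustration of that behaviour (not the PR code, just the property/hasattr mechanics):

import warnings

class Example(object):
    store_covariance = False

    @property
    def covariance_(self):
        # if fit() never set _covariance_, the return below raises
        # AttributeError, so hasattr(obj, 'covariance_') is False
        if hasattr(self, '_covariance_') and not self.store_covariance:
            warnings.warn("covariance_ will only be stored if "
                          "store_covariance is True", DeprecationWarning)
        return self._covariance_

obj = Example()
print(hasattr(obj, 'covariance_'))            # False
obj._covariance_ = [[1.0, 0.0], [0.0, 1.0]]
print(hasattr(obj, 'covariance_'))            # True, and the warning fires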
    # assert_warns(DeprecationWarning, clf.fit, X, y)
    assert_warns_message(DeprecationWarning, "'store_covariances' was renamed"
                         " to store_covariance in version 0.19 and will be "
                         "removed in 0.21.", clf.fit, X, y)
Also check that covariance_ (and covariances_ with warning) is stored.
sklearn/discriminant_analysis.py
Outdated
@@ -255,6 +256,36 @@ def __init__(self, solver='svd', shrinkage=None, priors=None,
        self.store_covariance = store_covariance  # used only in svd solver
        self.tol = tol  # used only in svd solver

    def __get__(self, obj, objtype):
I don't think you mean this. Defining __get__ turns the class into a descriptor...
Hi @jnothman, I misunderstood this and have just fixed it; please help me check if I'm in the right direction. Thanks!
Hi @jnothman, I think I'm stuck here, can you offer some more guidance? Thank you!
Otherwise LGTM
    # Test for solver 'lsqr' and 'eigen'
    for solver in ('lsqr', 'eigen'):
        clf = LinearDiscriminantAnalysis(solver=solver)
        assert_true(hasattr(clf.fit(X, y), 'covariance_'))
Please use assert_warns on the hasattr to check that this behaviour is deprecated appropriately. Either do that here and remove the test below, or make this test only about non-deprecated behaviour and avoid this assertion, with a comment saying that the opposite should be asserted once the deprecation is complete.
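A sketch of the first option (assuming assert_warns returns the wrapped call's result, as the sklearn.utils.testing helper does):

for solver in ('lsqr', 'eigen'):
    clf = LinearDiscriminantAnalysis(solver=solver).fit(X, y)
    # deprecated behaviour: covariance_ is still there but warns on access
    assert_true(assert_warns(DeprecationWarning, hasattr, clf, 'covariance_'))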
    # check that covariance_ (and covariances_ with warning) is stored
    clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
    assert_true(hasattr(clf, 'covariance_'))
    assert_true(hasattr(clf, 'covariances_'))
We do not want the tests issuing warnings. Drop this line and replace hasattr with getattr in the next statement to test the same thing but also catch the warning.
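i.e. roughly (a sketch, again assuming assert_warns returns the value of the wrapped call):

covs = assert_warns(DeprecationWarning, getattr, clf, 'covariances_')
assert_true(covs is not None)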
        np.array([0.088889, 0.533333])
    )

    # Test for solver svd, the default is to not set the covariances_ attribute
test with the SVD solver
    )

    # Test for solver svd, the default is to not set the covariances_ attribute
    clf = LinearDiscriminantAnalysis(solver='svd').fit(X, y)
Why use X, y instead of X6, y6 as in the other checks?
Thanks! Actually, there's no special purpose; I just changed it to X, y.
doc/whats_new.rst
Outdated
    :issue:`8174` by :user:`Tahar Zanouda <tzano>`, `Alexandre Gramfort`_
    and `Raghav RV`_.

- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions
It looks like there are some unwanted changes in this diff. The only thing that should be added is your paragraph about store_covariances_ and covariances_.
Sorry, these seem to be caused by merge conflicts. I just fixed them. Thanks!
@@ -245,6 +245,7 @@ class LinearDiscriminantAnalysis(BaseEstimator, LinearClassifierMixin,
    >>> print(clf.predict([[-0.8, -1]]))
    [1]
    """
Looks like the docstring doesn't explain that store_covariance only makes a difference if solver='svd'.
I think it did last I looked
huh either it changed or I'm blind...
Looks good apart from minor nitpicks.
def test_lda_store_covariance():
    # Test for solver 'lsqr' and 'eigen'
    for solver in ('lsqr', 'eigen'):
        clf = LinearDiscriminantAnalysis(solver=solver).fit(X6, y6)
Maybe add a comment that store_covariance has no effect on these solvers?
    # check that covariance_ (and covariances_ with warning) is stored
    clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
    assert_true(hasattr(clf, 'covariance_'))
I would remove this (because it creates a DeprecationWarning) and store the result of assert_warns_message and ensure that it's True.
This line should not raise a warning. The fit on the line before must. It should be changed to assert or ignore the warning
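Putting both comments together, the check could take a shape like this (a sketch; the message prefix and the assumption that assert_warns_message returns the fitted estimator are mine):

# fit() with the deprecated keyword must warn; asserting the warning here
# also keeps it from leaking out of the test
clf = assert_warns_message(
    DeprecationWarning,
    "'store_covariances' was renamed to store_covariance",
    QuadraticDiscriminantAnalysis(store_covariances=True).fit, X6, y6)
assert_true(hasattr(clf, 'covariance_'))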
Some comments
sklearn/discriminant_analysis.py
Outdated
            warnings.warn("'store_covariances' was renamed to store_covariance"
                          " in version 0.19 and will be removed in 0.21.",
                          DeprecationWarning)
            store_covariance = self.store_covariances
To simplify the code, I think we should have:
store_covariance = self.store_covariance or self.store_covariances
if store_covariance:
# some code
But we can't figure out whether the value of store_covariance comes from self.store_covariance or self.store_covariances, so we can't decide whether to raise a warning?
The only thing you need for the warning is self.store_covariances.
    assert_array_almost_equal(
        clf.covariance_[0],
        np.array([0.422222, 0.088889])
Where do the expected covariances come from? Is there a way to compute them in a different way in order to make the test less magical?
I'm not familiar with this part. Do you mean we should find another program, or is there another way to compute them within sklearn?
Any idea about this? @jnothman Thanks!
As long as the covariance computed by each solver is asserted equal, I think that's more than adequate. They are, after all, used within these techniques to produce the overall model.
Why is this split into two asserts? It's a single covariance matrix, right?
"removed in 0.21.", clf.fit, X, y) | ||
|
||
# check that covariance_ (and covariances_ with warning) is stored | ||
clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6) |
You don't need to redefine clf here I think; you can just reuse the existing clf.
sklearn/discriminant_analysis.py
Outdated
    def fit(self, X, y):
        """Fit the model according to the given training data and parameters.

        .. versionchanged:: 0.19
            *store_covariance* has been moved to main constructor.
            *store_covariances* has been moved to main constructor as
Improvement: say that "store_covariances is not a parameter of fit anymore. Use the store_covariance parameter in the constructor instead." Also I would use double-backticks (monospace in reST) for the parameter names rather than asterisks (italic in reST).
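For instance, the note could read roughly (wording adapted from the suggestion above):

.. versionchanged:: 0.19
    ``store_covariances`` is not a parameter of ``fit`` anymore.
    Use the ``store_covariance`` parameter in the constructor instead.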
Not sure what this means, we already have store_covariances in the constructor. Should we add store_covariance?
It means your formulation is not clear to me and I would use the one I proposed instead.
                                          store_covariance=True).fit(X6, y6)
    assert_true(hasattr(clf, 'covariance_'))

    assert_array_almost_equal(
why is this split into two asserts?
        self.tol = tol

    @property
    @deprecated("Attribute covariances_ was deprecated in version"
                " 0.19 and will be removed in 0.21. Use "
Deprecated in 0.20 and removed in 0.22?
@jnothman What do you think about this?
I'd still be okay with merging this in 0.19 final. It's not controversial.
ok cool.
Thank you for your guidance! @jnothman @amueller @agramfort @ogrisel
Thanks for your work. Sorry about leading you up the garden path! I'm glad this is merged.
…t-learn#8130) * for scikit-learn#7998 * Fix some style error and add test * Add local variable store_covariance * better deprecation * fix bug * Style check * fix covariance_ * style check * Update * modify test * Formating * update * Update * Add whats_new.rst * Revert "Add whats_new.rst" This reverts commit 4e5977d. * whats_new * Update for FutureWarning * Remove warning from the setter * add fit in test * drop back * Quick fix * Small fix * Fix * update new * Fix space * Fix docstring * fix style * Fix * fix assert
Reference Issue
Fix #7998
See also the previous one.