Changed layer number and learning rate init to make execution of plot_mnist_filters.py quicker #21647
Conversation
LGTM, thanks for the improvement.
While you are at it, could you please edit the comments to expand the CI acronym from "because of CI’s time constraints" to "because of resource usage constraints on our Continuous Integration infrastructure that is used to build this documentation on a regular basis."
The comment in the code says that this doesn't converge; does it still not converge? There's also a mistake in the comment: an extra "are", which you can fix too. Also, please mention the absolute run times before and after.
@ogrisel Edited your improved explanation into the docstring
@adrinjalali Yes, the rest should be good; I'll delete the extra "are" and also paste in some new run times
@adrinjalali @ogrisel I deleted the extra "are" and added @ogrisel's comment in that line as well, since it was the same as in the docstring; I figured the new explanation belongs in both places (the docstring and the inline comment)
I think we can get a better speedup here if we use a much smaller part of the data as the training set. Could you please use train_test_split(X, y, random_state=0, test_size=0.7) instead?
@adrinjalali Okay, I'll get on it soon :)
@adrinjalali We get the following results. With X_train, X_test = X[:60000], X[60000:] we get one set of timings, and with X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.7) we get another (timings were posted as images, not captured here). So, you were right.
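For reference, a minimal sketch of the two splitting strategies compared above (the array here is a small stand-in; the real example loads 70,000 MNIST samples of 784 features via fetch_openml):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for the MNIST arrays: 100 samples instead of 70,000.
X = np.arange(700).reshape(100, 7)
y = np.arange(100)

# Original split: a fixed slice (60,000 training samples on real MNIST).
X_train_a, X_test_a = X[:60], X[60:]

# Proposed split: shuffled, keeping only 30% of the data for training.
X_train_b, X_test_b, y_train_b, y_test_b = train_test_split(
    X, y, random_state=0, test_size=0.7
)

print(X_train_a.shape)  # (60, 7)
print(X_train_b.shape)  # (30, 7) -- half the training data, so fitting is faster
```

The speedup comes purely from the smaller training set; test_size=0.7 means 70% of the samples are held out.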
LGTM, are you okay with the changes @ogrisel?
Looking at the changes since @ogrisel's approval, I think we can merge, thanks @sveneschlbeck
Changed layer number and learning rate init to make execution of plot_mnist_filters.py quicker (scikit-learn#21647) * Changed layer number and learning rate init to make execution of example faster * Update plot_mnist_filters.py * Update plot_mnist_filters.py * Update plot_mnist_filters.py * Update plot_mnist_filters.py
#21598 @adrinjalali
Tuned down the number of layers and adapted the learning_rate_init and max_iter params to make the example between 15 and 20 seconds quicker, with almost no quality loss and without changing the learning effect or character of the example.
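A sketch of the kind of change described above, using a small built-in dataset as a stand-in for MNIST. The specific values (a 40-unit hidden layer, learning_rate_init=0.2, max_iter=8) are illustrative of "tuning down" the model, not necessarily the exact values merged in this PR:

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Small stand-in dataset; the real example fetches MNIST with fetch_openml.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1], as the MNIST example does with /255

# Illustrative "tuned down" settings in the spirit of this PR: a smaller
# hidden layer, a larger initial learning rate, and fewer iterations.
mlp = MLPClassifier(
    hidden_layer_sizes=(40,),
    learning_rate_init=0.2,
    max_iter=8,
    solver="sgd",
    random_state=1,
)
mlp.fit(X, y)  # warns that it did not converge, which the example accepts

print(mlp.coefs_[0].shape)  # (64, 40): one weight column per hidden unit
```

Fewer hidden units mean fewer weights to update per SGD step, and a smaller max_iter caps the number of passes over the data, which is where the wall-clock savings come from; the first-layer weights (mlp.coefs_[0]) are what the example visualizes as filters.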