`spack solve`: add a `--profile` option #46449

base: develop
Conversation
force-pushed from 3c98f4e to 94eabb2

@trws FYI

force-pushed from 8ec2ab6 to 20fb42b

force-pushed from 20fb42b to d20d1ed
```python
self._profile = {}
for atom in init.symbolic_atoms:
    solver_literal = init.solver_literal(atom.literal)
    self._profile[solver_literal] = Data(atom, solver_literal, 0, 0)
```
I'm doing something similar (manually) to try to get some understanding of the heuristic. In that case I override the `decide` method.

I think this is wrong, in the sense that each `solver_literal` may be associated with many `atom`s, so the structure should be more:

```diff
-self._profile[solver_literal] = Data(atom, solver_literal, 0, 0)
+self._profile[solver_literal].append(Data(atom, solver_literal, 0, 0))
```
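For reference, a minimal sketch of that kind of `decide` override, assuming clingo's standard propagator protocol (`DecideLogger` is a hypothetical name, not code from this PR):

```python
class DecideLogger:
    def decide(self, thread_id: int, assignment, fallback: int) -> int:
        # Observe each heuristic decision point, then defer to the solver's
        # default heuristic by returning the fallback literal unchanged.
        print(f"thread {thread_id}: decision requested, fallback literal {fallback}")
        return fallback
```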
Not really understanding this b/c AFAIK each literal maps to exactly one (fully grounded) atom. There's one `Data` per literal/atom. Can you elaborate on this?

If you want, we could track decisions as well -- could it just be a third column?
I think the grounder's representation is not the same as the solver's. Each symbolic atom maps to a solver literal, but several atoms may map to the same one (I guess when they can be proven to be always true/false together). Also, I think there are solver literals that do not map to any symbolic atom and are just used as an "internal" representation for the solver.
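A quick way to check this, sketched against the public clingo API (`MappingInspector` and the tiny program are illustrative, not code from this PR):

```python
from collections import defaultdict

import clingo


class MappingInspector:
    """Group symbolic atoms by solver literal to show the mapping is many-to-one."""

    def init(self, init: clingo.PropagateInit) -> None:
        atoms_by_literal = defaultdict(list)
        for atom in init.symbolic_atoms:
            atoms_by_literal[init.solver_literal(atom.literal)].append(atom.symbol)
        shared = {lit: syms for lit, syms in atoms_by_literal.items() if len(syms) > 1}
        print(f"{len(shared)} solver literal(s) shared by more than one symbolic atom")


ctl = clingo.Control()
ctl.add("base", [], "a. b :- a. c :- a.")  # all three atoms simplify to facts
ctl.ground([("base", [])])
ctl.register_propagator(MappingInspector())
ctl.solve()
```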
`spack spec` output has looked like this for a while:

```console
> spack spec /v5fn6xo /wd2p2v7
Input spec
--------------------------------
- /v5fn6xo

Concretized
--------------------------------
[+] openssl@3.3.1%apple-clang@16.0.0~docs+shared build_system=generic certs=mozilla arch=darwin-sequoia-m1
[+] ^ca-certificates-mozilla@2023-05-30%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
...

Input spec
--------------------------------
- /wd2p2v7

Concretized
--------------------------------
[+] py-six@1.16.0%apple-clang@16.0.0 build_system=python_pip arch=darwin-sequoia-m1
[+] ^py-pip@23.1.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
```

But the input spec is right there on the CLI, and it doesn't add anything to the output. Also, since #44843, specs concretized in the CLI line can be unified, so it makes sense to display them as we did in #44489 -- as one multi-root tree instead of as multiple single-root trees. With this PR, concretize output now looks like this:

```console
> spack spec /v5fn6xo /wd2p2v7
[+] openssl@3.3.1%apple-clang@16.0.0~docs+shared build_system=generic certs=mozilla arch=darwin-sequoia-m1
[+] ^ca-certificates-mozilla@2023-05-30%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] ^gmake@4.4.1%apple-clang@16.0.0~guile build_system=generic arch=darwin-sequoia-m1
[+] ^perl@5.40.0%apple-clang@16.0.0+cpanm+opcode+open+shared+threads build_system=generic arch=darwin-sequoia-m1
[+] ^berkeley-db@18.1.40%apple-clang@16.0.0+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=darwin-sequoia-m1
[+] ^bzip2@1.0.8%apple-clang@16.0.0~debug~pic+shared build_system=generic arch=darwin-sequoia-m1
[+] ^diffutils@3.10%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^libiconv@1.17%apple-clang@16.0.0 build_system=autotools libs=shared,static arch=darwin-sequoia-m1
[+] ^gdbm@1.23%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^readline@8.2%apple-clang@16.0.0 build_system=autotools patches=bbf97f1 arch=darwin-sequoia-m1
[+] ^ncurses@6.5%apple-clang@16.0.0~symlinks+termlib abi=none build_system=autotools patches=7a351bc arch=darwin-sequoia-m1
[+] ^pkgconf@2.2.0%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^zlib-ng@2.2.1%apple-clang@16.0.0+compat+new_strategies+opt+pic+shared build_system=autotools arch=darwin-sequoia-m1
[+] ^gnuconfig@2022-09-17%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] py-six@1.16.0%apple-clang@16.0.0 build_system=python_pip arch=darwin-sequoia-m1
[+] ^py-pip@23.1.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] ^py-setuptools@69.2.0%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[-] ^py-wheel@0.41.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
...
```

With no input spec displayed -- just the concretization output, shown as one consolidated tree with multiple roots.

- [x] remove "Input Spec" section and "Concretized" header from `spack spec` output
- [x] print concretized specs as one BFS tree instead of multiple

---------

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Automatic splicing saw `Spec` grow a `__len__` method, but it's only used in one place and it's not clear the semantics are useful elsewhere. It also runs the risk of Specs one day being confused for other types of containers. Rather than introduce a new function for one algorithm, let's use a more specific method in the splice code.

- [x] Use topological ordering in `_resolve_automatic_splices` instead of sorting by node count
- [x] delete `Spec.__len__()` and `Spec.__bool__()`

---------

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
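To illustrate the idea with a minimal sketch (a made-up dependency graph, not Spack's actual `Spec` API): topological order guarantees each node is processed only after all of its dependencies, which is the kind of "dependencies first" property the node-count sort was roughly approximating.

```python
from graphlib import TopologicalSorter  # Python >= 3.9

# Hypothetical dependency graph: each key depends on the nodes in its set.
deps = {
    "app": {"libfoo", "libbar"},
    "libfoo": {"zlib"},
    "libbar": {"zlib"},
    "zlib": set(),
}

# static_order() yields dependencies before their dependents, so "zlib"
# always comes first and "app" always comes last.
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['zlib', 'libfoo', 'libbar', 'app']
```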
We often want to know what the solver is thinking about in order to optimize it. We can generate a profile of which atoms are most frequently propagated and/or undone by the solver using a custom propagator class.

This adds a `spack.solver.profiler` module with a simple propagator that counts `undo` and `propagate` operations and associates them with atoms in the solve. To simplify things, rather than enumerating *every* atom in the logic program, we coarsen to show only function names for everything but `attr()`, and we show the first argument of any `attr()` atom (e.g. `attr("node")`).

`spack solve --profile` will print the top 40 most propagated atoms in the program, along with how many times they are undone (these are usually but not always similar). The list can give people working with the concretizer a rough idea of where the solver is spending its time, and hopefully point to areas of the program to optimize.

This is based on some code that Tom Scogland wrote a year or so ago. I did the work of integrating it as an option so that I do not have to keep looking it up and running this from the CLI :).

Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
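As a rough sketch of the approach (assuming the clingo propagator protocol; names like `ProfilePropagator` and `coarse_name` are illustrative and not necessarily what the PR's `spack.solver.profiler` module uses):

```python
from collections import Counter

import clingo


def coarse_name(symbol: clingo.Symbol) -> str:
    # Coarsen atoms as described above: keep only the function name,
    # except for attr(), which keeps its first argument, e.g. attr("node").
    if symbol.name == "attr" and symbol.arguments:
        return f"attr({symbol.arguments[0]})"
    return symbol.name


class ProfilePropagator:
    def init(self, init: clingo.PropagateInit) -> None:
        self._name = {}
        self._propagated = Counter()
        self._undone = Counter()
        for atom in init.symbolic_atoms:
            lit = init.solver_literal(atom.literal)
            self._name[lit] = coarse_name(atom.symbol)
            init.add_watch(lit)  # request propagate()/undo() callbacks for this literal

    def propagate(self, control, changes) -> None:
        for lit in changes:
            self._propagated[self._name.get(lit, "<internal>")] += 1

    def undo(self, thread_id, assignment, changes) -> None:
        for lit in changes:
            self._undone[self._name.get(lit, "<internal>")] += 1

    def print_report(self, top: int = 40) -> None:
        # Top `top` most-propagated coarse names, with their undo counts.
        for name, count in self._propagated.most_common(top):
            print(f"{count:>10} {self._undone[name]:>10} {name}")
```

Registered with `ctl.register_propagator(ProfilePropagator())` before `ctl.solve()`, this tallies every propagation and undo per coarsened atom name.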
force-pushed from d20d1ed to 75e5994
Sample output for ascent: