Extend flattener to allow multiple sets #1768
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #1768      +/-   ##
==========================================
- Coverage   75.14%   74.96%   -0.18%
==========================================
  Files         638      638
  Lines       91863    94318    +2455
==========================================
+ Hits        69027    70706    +1679
- Misses      22836    23612     +776
```
Continue to review full report at Codecov.
A few minor comments but otherwise I think this looks fine.
```python
# for sets, comps in zip(sets_list, comps_list):
#     if len(sets) == 1 and sets[0] is m.d2:
#         ref_data = {
#             self._hashRef(Reference(m.v_2n[:, ('a', 1)])),
#             self._hashRef(Reference(m.v_2n[:, ('b', 2)])),
#         }
#         # Cannot access the data of ^these references
#         for comp in comps:
#             self.assertIn(self._hashRef(comp), ref_data)
```
Can this commented code be removed? At a higher level, is this test complete and doing what you expect? The comments seem to indicate that there may have been some issues debugging it.
This test is basically not doing what is expected. The line `self.assertNotIn(('a', 1), ref1._index._ref)` confirms this: `('a', 1)` should be a valid index. This is a compatibility problem between slices and `normalize_index.flatten == False`. Basically, a one-dimensional slice does not properly slice a higher-dimensional set in this case. I'm not sure there is an easy fix. The commented code is what I would have expected to work.
Probably should just open an issue about this and then remove this test?
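To make the mismatch concrete, here is a pure-Python sketch (hypothetical names, not Pyomo internals) of how positional wildcard matching lines up with flattened index keys but fails to line up when a multi-dimensional set member stays a single tuple-valued position:

```python
# Illustrative only: a wildcard that matches exactly one key position.
WILDCARD = object()

def matches(pattern, key):
    """Positionally match a pattern against a key; WILDCARD matches one slot."""
    if len(pattern) != len(key):
        return False
    return all(p is WILDCARD or p == k for p, k in zip(pattern, key))

# With flattened indices, the 2-D member ('a', 1) occupies two positions,
# so keys look like (t, 'a', 1) and a three-position pattern matches:
flat_keys = [(0, 'a', 1), (0, 'b', 2), (1, 'a', 1), (1, 'b', 2)]
print([k for k in flat_keys if matches((WILDCARD, 'a', 1), k)])

# Without flattening, the 2-D member stays one tuple-valued position,
# so the same three-position pattern no longer lines up at all:
nested_keys = [(0, ('a', 1)), (0, ('b', 2)), (1, ('a', 1)), (1, ('b', 2))]
print([k for k in nested_keys if matches((WILDCARD, 'a', 1), k)])  # no matches
print([k for k in nested_keys if matches((WILDCARD, ('a', 1)), k)])
```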
Thanks for the clarification. I think the best thing to do is to open an issue and then add a "TODO" comment in this test that references the issue. I think leaving the commented test for future reference is fine as long as we can follow the context of why it's there.
Added comment and opened issue
I think this looks good (up to some questions on naming / documentation).
I do have a side question for you: how challenging do you think it would be to add a wrapper object around ReferenceDict to support reordering the indices of a reference? That way you wouldn't have to create separate results for `[I, J, I]` and `[I, I, J]`. I suspect we could do that pretty easily (in a separate PR).
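For what it's worth, a minimal pure-Python sketch of such a reordering view (the `ReorderedView` class and its data are hypothetical; the real wrapper would sit around ReferenceDict):

```python
class ReorderedView:
    """Read-only view that permutes the positions of tuple keys."""

    def __init__(self, data, order):
        # order[i] is the position in the underlying key that supplies
        # position i of the view's key.
        self._data = data
        self._order = order

    def __getitem__(self, key):
        # Undo the permutation to recover the underlying key.
        underlying = [None] * len(key)
        for view_pos, data_pos in enumerate(self._order):
            underlying[data_pos] = key[view_pos]
        return self._data[tuple(underlying)]

data = {(1, 'a', 2): 'x', (1, 'b', 3): 'y'}   # keyed as [I, J, I]
view = ReorderedView(data, order=(0, 2, 1))   # accessed as [I, I, J]
print(view[(1, 2, 'a')])  # 'x'
```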
This may be a larger can of worms than I should open right now, but I'm not sure I like the name. This functionality seems closer to a partition of component data objects into containers that preserve indexing by the specified sets. Any thoughts on the name?
I think this does move from a higher dimension down to a lower one, in that all the index sets other than the ones you are flattening along disappear.
I usually think of flattening as going from a single 2d array to a single 1d array. Here we are going from high-dimensional objects to many low-dimensional objects. I suppose it still is flattening, but it seems to be closer to some sort of partition. It feels like cheating to say we "flattened" an object when really we broke it up enough that we could call each piece flat.
After a little more thought, I think we are actually doing three things here. First, we split components by eliminating any sets we don't care about; second, we partition these components according to what sets they're still indexed by; and third, we create references that allow the user to access the new components directly. The first and second steps can be thought of as partitions, the third as flattening. I think the entire procedure can be thought of as viewing components "from the perspective of" the specified sets. Not sure how much this helps naming. The best I can come up with is something like ...
I suggest we keep the current name for now. This is a fairly advanced and specific tool, and I think expanding the documentation to include the purpose and intended use of this functionality will be more helpful to "future-us" than trying to find a more precise name at this point.
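As a rough illustration of the split-then-partition steps described above, here is a pure-Python sketch with hypothetical data structures in place of Pyomo components (only the grouping logic, not the reference-creation step):

```python
from collections import defaultdict

def partition_along_sets(components, sets_of_interest):
    """Group components by which of the specified sets index them.

    `components` maps a component name to the tuple of set names indexing it.
    Returns {frozenset_of_kept_sets: [component names]}.
    """
    partition = defaultdict(list)
    for name, index_sets in components.items():
        # Step 1: eliminate any sets we don't care about.
        kept = frozenset(s for s in index_sets if s in sets_of_interest)
        # Step 2: components indexed by the same remaining sets land together.
        partition[kept].append(name)
    return dict(partition)

components = {
    'T':    ('time', 'space'),           # indexed by both sets
    'P':    ('time',),                   # indexed by time only
    'F':    ('time', 'space', 'phase'),  # 'phase' is dropped in step 1
    'area': (),                          # indexed by neither set
}
result = partition_along_sets(components, {'time', 'space'})
print(result[frozenset({'time', 'space'})])  # ['T', 'F']
```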
Summary/Motivation:

The current `pyomo.dae.flatten` module has a function `flatten_dae_components` that partitions indexed components into new (reference) components, indexed only (if at all) by a specified set. This PR extends this functionality by allowing multiple sets to be specified. The returned components may be indexed by any or all of the specified sets. This is especially useful for PDE modeling/analysis, where a unique "variable" is defined as something indexed by, e.g., time and space.

Changes proposed in this PR:
- Add a function `flatten_components_along_sets` that implements this partition
- Implement `flatten_dae_components` as a special case of `flatten_components_along_sets`
Legal Acknowledgement
By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution: