Equations skipped at inlets in 1-D control volumes #424

Open
Robbybp opened this issue Jul 15, 2021 · 6 comments

Labels: core (Issues dealing with core modeling components), enhancement (New feature or request), Priority:Low (Low Priority Issue or PR)

@Robbybp (Member) commented Jul 15, 2021

In 1-D control volumes, and some property package state blocks, equations are often skipped at the inlet (or at points where defined_state is True in state blocks). This causes some inconveniences and problems when attempting to treat a 1-D control volume as a spatial DAE. My proposal is to not skip (either implicitly, or via Constraint.Skip) any equations at a particular index of a continuous set. If an equation's presence at a particular index leads to an over-determined model, as with sum equations when defined_state is True, then I propose that this equation be deactivated rather than omitted.
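
To make this concrete, here is a minimal Pyomo sketch (the component names are illustrative only, not actual IDAES components) contrasting the current skip-at-inlet pattern with writing the equation everywhere and deactivating it at the inlet:

```python
import pyomo.environ as pyo
from pyomo.dae import ContinuousSet

m = pyo.ConcreteModel()
m.x = ContinuousSet(bounds=(0, 1))
m.comp = pyo.Set(initialize=["A", "B"])
m.mole_frac = pyo.Var(m.x, m.comp, initialize=0.5)

# Current pattern: the equation is simply not written at the inlet.
def sum_skip_rule(m, x):
    if x == m.x.first():
        return pyo.Constraint.Skip
    return sum(m.mole_frac[x, j] for j in m.comp) == 1
m.sum_eqn_skipped = pyo.Constraint(m.x, rule=sum_skip_rule)

# Proposed pattern: write the equation at every index of the continuous set.
def sum_rule(m, x):
    return sum(m.mole_frac[x, j] for j in m.comp) == 1
m.sum_eqn = pyo.Constraint(m.x, rule=sum_rule)

pyo.TransformationFactory("dae.finite_difference").apply_to(m, nfe=4, wrt=m.x)

# Deactivate at the inlet so fixing the state variables there does not
# over-determine the model ...
m.sum_eqn[m.x.first()].deactivate()

# ... and a user who fixes differential variables at the boundary instead of
# the state variables can simply re-activate it.
m.sum_eqn[m.x.first()].activate()
```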

Problems I've encountered with skipped equations at inlets include:

  1. Fixing differential variables at their boundary conditions instead of fixing state variables leads to an underdetermined model. My workaround is to manually add the sum equation. In my proposed solution, I would just have to activate it.
  2. Multigrid initializations require a value at every space index for every space-indexed variable. Because certain equations, such as rate-generation stoichiometry equations and mass balances, are skipped at inlets, the variables appearing only in those equations do not participate in any expression there and therefore do not receive values when the model is solved. This leads to invalid values being used to interpolate results onto a finer grid.
  3. Decomposition strategies that require a partition of variables into differential and algebraic need special handling at inlets, as the set of variables that are algebraic can change at these points.
@Robbybp Robbybp added the enhancement New feature or request label Jul 15, 2021
@andrewlee94 (Member) commented:

To record some of the things that were discussed at the Core Dev meeting:

  1. From the physical modeling perspective, these points are generally excluded as these are conditions at the boundary and thus do not have a "physical" meaning. Due to this, most modelers traditionally expect these points not to exist, so we need to make sure that having a value here doesn't add confusion or create problems for them.
  2. The first issue that comes to mind is plotting of profiles - most modelers do not expect to get a value at the boundary, and this value may not be consistent with the rest of the model. We don't want plots to show up with sudden unexplained jumps at the boundaries just because we gave a variable a value at the boundary.
  3. One of the proposed uses of this boundary value was for interpolating onto finer grids; however, this requires that the boundary value be at least consistent with the rest of the model (if not physically meaningful). If we are going to do this, then we need to be able to ensure this condition is met, which is likely to be difficult due to the highly non-linear nature of models near the boundary. It might be more useful to extrapolate from points 1 and 2 than to interpolate between 0 and 1 (a small numeric sketch of this follows below).
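
As a small numeric illustration of the difference (all values made up): linear extrapolation to the boundary from the first two interior points, versus interpolation onto a finer grid that uses an inconsistent boundary value.

```python
import numpy as np

# Hypothetical coarse-grid profile; index 0 is the boundary, and its value
# is deliberately inconsistent with the interior trend.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
c = np.array([9.9, 4.0, 3.0, 2.5, 2.2])

# Extrapolating to x = 0 from the first two interior points follows the trend:
slope = (c[2] - c[1]) / (x[2] - x[1])
c0_extrapolated = c[1] + slope * (x[0] - x[1])  # 5.0

# Interpolating onto a finer grid with the inconsistent boundary value drags
# that value into the refined profile near the inlet:
x_fine = np.linspace(0.0, 0.25, 6)
c_fine = np.interp(x_fine, x, c)  # ramps from 9.9 down to 4.0
print(c0_extrapolated, c_fine)
```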

@andrewlee94 (Member) commented:

Additional thoughts:

  1. For rate reactions in a single phase system, calculating the generation based on the inlet conditions is probably a good approximation of the generation at 0+. How this might work in a counter-flow system might need a bit more thought, but those are fairly rare cases (and not necessarily supported in IDAES).
  2. Generation for equilibrium reactions might be similar, but they behave differently and can change quickly.
  3. Mass transfer terms really depend on some knowledge of the system. I.e. maybe they should be extrapolated from downstream points. Either way, this would need to be a fixed variable anyway, so we would need some way to come up with a boundary value for this if we want values for all variables at boundaries.
  4. Q, W and deltaP terms are similar, and need to come from the unit model.

Subject to having good values for those variables, we can probably say that using the material, energy and pressure balances would at least give a consistent, if not meaningful, value at the boundary.

@ksbeattie ksbeattie added the Priority:Normal Normal Priority Issue or PR label Jul 15, 2021
@Robbybp (Member, Author) commented Jul 15, 2021

> 1. From the physical modeling perspective, these points are generally excluded as these are conditions at the boundary and thus do not have a "physical" meaning. Due to this, most modelers traditionally expect these points not to exist, so we need to make sure that having a value here doesn't add confusion or create problems for them.

I think I disagree with this. If we have values for state variables and flow rate, then the values for all other variables, except possibly derivatives, have a physical meaning. And this may not be typical, but when I have a constraint indexed by a set, I expect it to have an expression for every element of that set. The only instance I would expect otherwise is if the expression is not well-defined (e.g. discretization equations). If we think they lead to confusing values, I think deactivating them is sufficient.

> 2. The first issue that comes to mind is plotting of profiles - most modelers do not expect to get a value at the boundary, and this value may not be consistent with the rest of the model. We don't want plots to show up with sudden unexplained jumps at the boundaries just because we gave a variable a value at the boundary.

I'm sure this can happen, but if the state variables are "smooth" along our length domain, then all other variables should be as well, right? "Property variables" (or similarly, algebraic variables) are continuous functions of the state variables, and derivatives are continuous functions of the state variables and property variables, right? If we add variables that are independent of the rest of our model, we should expect a jump. If there are cases where what I'm proposing leads to this, then these independent equations/variables should remain inactive/stale.

> 3. One of the proposed uses of this boundary value was for interpolating onto finer grids; however, this requires that the boundary value be at least consistent with the rest of the model (if not physically meaningful). If we are going to do this, then we need to be able to ensure this condition is met, which is likely to be difficult due to the highly non-linear nature of models near the boundary. It might be more useful to extrapolate from points 1 and 2 than to interpolate between 0 and 1.

I think I agree that it would probably be better to extrapolate towards the boundary than to interpolate with the boundary value. But that sounds hard to implement. I am interested in getting a quick result from a numpy regular grid interpolation function, and I think that values obtained using our model equations (if they existed) would be more useful than default values or None. Also, if our model variables are continuous functions of the state variables, I don't really see a fundamental problem with interpolating.
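
For concreteness, a rough sketch of that kind of quick regular-grid interpolation, using scipy.interpolate.RegularGridInterpolator on a hypothetical time-by-space profile (all values illustrative):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse (time, space) profile extracted from a solved model.
t = np.array([0.0, 0.5, 1.0])
x = np.array([0.0, 0.5, 1.0])
values = np.array([
    [1.0, 0.8, 0.7],
    [1.1, 0.9, 0.8],
    [1.2, 1.0, 0.9],
])  # if no equation gave the x = 0 column a value, it would hold stale defaults

interp = RegularGridInterpolator((t, x), values)

# Evaluate on a finer space grid to initialize a refined discretization.
t_fine, x_fine = np.meshgrid(t, np.linspace(0.0, 1.0, 5), indexing="ij")
points = np.stack([t_fine.ravel(), x_fine.ravel()], axis=-1)
fine_values = interp(points).reshape(t_fine.shape)
print(fine_values)
```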

> 1. For rate reactions in a single phase system, calculating the generation based on the inlet conditions is probably a good approximation of the generation at 0+. How this might work in a counter-flow system might need a bit more thought, but those are fairly rare cases (and not necessarily supported in IDAES).

I don't think this changes for a counter-flow system. If the state variables of phase 1 at 0 are a good approximation of those at 0+, and the state variables of phase 2 at 0 are a good approximation of those at 0+, then the reaction rate at 0 will be a good approximation of that at 0+.

> 3. Mass transfer terms really depend on some knowledge of the system. I.e. maybe they should be extrapolated from downstream points. Either way, this would need to be a fixed variable anyway, so we would need some way to come up with a boundary value for this if we want values for all variables at boundaries.

I don't see why mass transfer terms would need to be fixed. Are they not a function of the state variables?

I am actually slightly less concerned about having balance equations (differential equations) than about having all the algebraic equations present at the boundaries. This is because I would like the set of algebraic variables and equations to be the same at every point in space. The alternative makes it much more difficult to check that a DAE model is index-1, or to exploit index-1 DAE structure in a solver/simulator. I believe consistent structure of our models across continuous sets will also be necessary to interface to external tools for DAEs/PDEs.
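
As a rough sketch of the kind of structural check this would enable (generic Pyomo, not an existing IDAES utility, and it assumes each indexing set is one-dimensional), one could flag the points of the length domain where a constraint was skipped:

```python
import pyomo.environ as pyo

def find_skipped_points(model, domain):
    """For each constraint indexed (in part) by `domain`, report the domain
    points at which fewer ConstraintData objects exist than at other points,
    i.e. where equations were skipped. Under the proposal above (deactivate
    rather than skip), this would come back empty."""
    skipped = {}
    for con in model.component_objects(pyo.Constraint):
        if not con.is_indexed():
            continue
        subsets = list(con.index_set().subsets())
        if not any(s is domain for s in subsets):
            continue
        pos = next(i for i, s in enumerate(subsets) if s is domain)
        counts = {p: 0 for p in domain}
        for idx in con:
            point = idx if len(subsets) == 1 else idx[pos]
            counts[point] += 1
        expected = max(counts.values(), default=0)
        missing = [p for p, n in counts.items() if n < expected]
        if missing:
            skipped[con.name] = missing
    return skipped
```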

@andrewlee94 (Member) commented:

  1. To give an example, what does the reaction generation term mean at the boundary? This starts getting at the fundamental assumptions we make as modelers - i.e. that nothing happens outside the boundaries of the unit model. In doing this, we effectively assume that the boundary points are not part of the reactor volume - they represent hard boundaries at the ends of the volume at which point no reactions, heat or mass transfer take place. I.e. we implicitly assume that all these terms are 0, which in turn means the balance equations at this point are trivial - dq/dx == 0. Hence, we do not assign any physical meaning to the values at the boundary.

  2. They might be smooth (and continuously differentiable too), but they are not necessarily monotonic or linear (in fact they almost always are not). Hence, you have to be very careful when interpolating near boundaries.

  3. This stems from the above point. Extrapolation is likely to be safer in most ways.

  4. It might not - I just didn't have time to think through all the implications.

  5. Mass transfer terms are calculated at the unit model level, and may be simply fixed by the user. At least at the control volume level (where the balances are written), we have no idea on how they will behave. I tend to limit myself to thinking at the control volume level, as it is well-defined; I have no idea what any given unit model may choose to do.

@Robbybp (Member, Author) commented Jul 22, 2021

  1. The reaction generation term at a boundary is just the generation due to the reaction rate that is calculated from the state variables at that boundary (a small sketch follows this list). I think all of these quantities are well-defined at the boundaries. If some of these are only well-defined in the interior of the reactor, then the values at the boundaries represent the limiting behavior as we approach the boundary. I don't think I agree with implicitly assuming that anything is zero at the boundary. If it is zero due to a boundary condition, that's fine, but otherwise all these terms should be the same continuous functions of the state variables that they are at every other point in space. It may be worth mentioning that we currently don't skip equations at all boundaries, just at inlets.
  2. Agreed. Technically this is true everywhere, right? Not just at the boundaries?
  3. I think this is hard to say in general.
  4. Okay, if mass transfer terms are handled by the user, then we can assume whatever they do is right.
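
As a purely illustrative sketch of point 1 (a generic first-order Arrhenius rate law, not any particular IDAES reaction package), the generation term at the boundary is just the same function of the state variables as anywhere else:

```python
import math

# Hypothetical first-order Arrhenius rate, r = k0 * exp(-Ea / (R * T)) * C.
def reaction_rate(T, C, k0=1.0e6, Ea=5.0e4, R=8.314):
    return k0 * math.exp(-Ea / (R * T)) * C

# Evaluating at the inlet boundary (x = 0) uses exactly the same function as
# at a nearby interior point; only the state values differ.
r_boundary = reaction_rate(T=350.0, C=2.0)
r_interior = reaction_rate(T=352.0, C=1.9)
print(r_boundary, r_interior)
```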

@andrewlee94 (Member) commented:

  1. Whilst it might not be physically true, that is the meaning that is traditionally assigned at the boundary. In many ways, this is just a convention, but it is one that a lot of modelers follow. Thus, deviating from that convention may cause confusion.
  2. Yes, it is true everywhere, but boundaries tend to be significantly more non-linear than elsewhere - hence the need for finer grids near boundaries. Thus, we should be very careful about interpolating from the boundary.
  3. You are correct that extrapolation can be just as bad. The problem is it is hard to know either way, which again is why we want to fill in those finer meshes.

@andrewlee94 andrewlee94 added Priority:Low Low Priority Issue or PR and removed Priority:Normal Normal Priority Issue or PR labels Apr 21, 2022
@andrewlee94 andrewlee94 moved this to In Progress in 2023 February Release Nov 8, 2022
@andrewlee94 andrewlee94 added Priority:Normal Normal Priority Issue or PR core Issues dealing with core modeling components and removed Priority:Low Low Priority Issue or PR labels Nov 8, 2022
@andrewlee94 andrewlee94 added Priority:Low Low Priority Issue or PR and removed Priority:Normal Normal Priority Issue or PR labels Feb 16, 2023