
Commit
add VPP and DPP
jiedxu committed May 3, 2020
1 parent 80ef48b commit 5455bb5
Showing 110 changed files with 1,029 additions and 10,490 deletions.
52 changes: 52 additions & 0 deletions docs/Inter-Elec.Rmd
@@ -0,0 +1,52 @@
---
editor_options:
chunk_output_type: console
# output: bookdown::gitbook
# bibliography: "../web/book.bib"
---

## Virtual Power Plant (VPP)

> VPP is a cluster of dispersed generating units, flexible loads, and storage systems that are grouped in order to operate as a single entity. [@morales2013integrating]

### Operation of VPP: Profit-Maximization Problem

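As a sketch of the ideas this section will develop (the units, capacities, and prices below are all hypothetical), a single-period, price-taking VPP maximizes profit by simply running every unit whose marginal cost is covered by the market price:

```python
def vpp_dispatch(units, price):
    """units: list of (capacity_MW, marginal_cost) pairs; price in $/MWh.
    Returns (total dispatch in MW, profit in $) for a single hour."""
    dispatch, profit = 0.0, 0.0
    for capacity, cost in units:
        if price >= cost:  # the market price covers this unit's marginal cost
            dispatch += capacity
            profit += (price - cost) * capacity
    return dispatch, profit

# Three hypothetical dispersed units: (capacity in MW, marginal cost in $/MWh).
units = [(10.0, 20.0), (5.0, 35.0), (8.0, 60.0)]
print(vpp_dispatch(units, 50.0))  # → (15.0, 375.0): only the two cheaper units run
```

Once network constraints, storage dynamics, and uncertainty are added, this greedy rule becomes a genuine optimization problem.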

### Location-Aware Flexibility

> Due to the dispersed nature of these resources, there is only one infrastructure “branched” enough to reach all of them: the distribution grid. Consequently, the management of VPPs will also call, among other things, for: [@morales2013integrating]
> 1. The enhancement of the control and monitoring of the distribution network to guarantee the performance, reliability, and security of the electricity supply.
> 2. The modeling, design and test of advanced components acting actively in the grid such as generators, transformers, smart meters, cables, breakers, insulators, power electronics, and converters.
> 3. The development of procedures to identify weaknesses in the distribution grid and propose guidelines for its reinforcement and expansion.

### Cost Allocation Problem


### Franchise Agreements and Cooperative Games

> Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take and the resulting collective payoffs. [@wiki:cooperative]
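A standard solution concept for such allocation problems is the Shapley value, which pays each player its average marginal contribution over all orders in which the grand coalition can form. A minimal sketch (the characteristic function `v` below, giving each coalition's savings, is hypothetical):

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value of each player: its marginal contribution averaged
    over every ordering in which the grand coalition can assemble."""
    shapley = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            shapley[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: total / len(orderings) for p, total in shapley.items()}

# Hypothetical savings (k$) that coalitions of three units achieve together.
v = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
     frozenset("AB"): 90, frozenset("AC"): 80, frozenset("BC"): 70,
     frozenset("ABC"): 120}
print(shapley_values("ABC", lambda s: v[frozenset(s)]))
# → {'A': 45.0, 'B': 40.0, 'C': 35.0}; the allocations sum to v(ABC) = 120
```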

## Dynamic Pricing and Demand Response

> Each retailer is deemed to have sold to its customers the amount of energy that went through their meters. If for any period the aggregate amount over all its customers exceeds the amount that it has contracted to buy, the retailer has to purchase the difference on the spot market at whatever value the spot price reached for that period. Similarly, if the amount contracted exceeds the amount consumed by its customers, the retailer is deemed to have sold the difference on the spot market. [@kirschen2018fundamentals]
> To reduce its exposure to the risk associated with the unpredictability of the spot market prices, a retailer therefore tries to forecast as accurately as possible the demand of its customers. [@kirschen2018fundamentals]
> The decision on energy consumption is ultimately left to the individual consumers, who must weigh cost savings against a potential loss of comfort. [@morales2013integrating]
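The settlement rule quoted above can be sketched per period as follows (all prices and quantities are hypothetical):

```python
def retailer_cost(contracted_mwh, metered_mwh, contract_price, spot_price):
    """Retailer's cost for one settlement period: the energy bought forward,
    plus the imbalance settled at the spot price (a negative imbalance is a
    deemed sale on the spot market)."""
    imbalance = metered_mwh - contracted_mwh  # >0: buy shortfall, <0: sell surplus
    return contracted_mwh * contract_price + imbalance * spot_price

# Shortfall: customers consumed 110 MWh against 100 MWh contracted at 40 $/MWh;
# the extra 10 MWh must be bought at a spot price of 80 $/MWh.
print(retailer_cost(100, 110, 40.0, 80.0))  # → 4800.0

# Surplus: only 90 MWh consumed; 10 MWh is deemed sold at a spot price of 20 $/MWh.
print(retailer_cost(100, 90, 40.0, 20.0))   # → 3800.0
```

The spot price enters only through the imbalance term, which is why accurate demand forecasts reduce the retailer's exposure to spot-price risk.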

### Necessary Infrastructure

> In order to evolve from a setup where supply follows demand to one where demand follows supply, power systems must undergo drastic structural and operational changes. [@morales2013integrating]

### Time-of-Use Tariff

> However, their relevance is challenged as the penetration of renewables into power systems grows sufficiently large to be able to influence prices in the wholesale electricity markets. Time-of-use tariffs are static, i.e., they are fixed long time in advance, and therefore unable to adapt to the rapid fluctuations of renewables. [@morales2013integrating]

### Real-Time Dynamic Pricing

Under real-time dynamic pricing, prices adapt continuously to the latest forecasts of renewable output and consumption.

## Dynamic Procurement & Pricing (DPP)

## References
18 changes: 18 additions & 0 deletions docs/MDP-MC.Rmd
@@ -0,0 +1,18 @@
---
editor_options:
chunk_output_type: console
# output: bookdown::gitbook
# bibliography: "../web/book.bib"
---

## Markov Chain

> A Markov process is a stochastic process with the property that, the probability of any particular future behavior of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behavior. [@pinsky2010introduction]

The stationary transition probability matrix (STPM), or Markov matrix (MM), can be used to describe the behavior of a Markov process.

> A Markov process is completely defined once its transition probability matrix and initial state (or, more generally, the probability distribution of the initial state) are specified. [@pinsky2010introduction]
> Suppose that a transition probability matrix on a finite number of states has the property that when raised to some power `k`, the `k`-step transition probability matrix has all of its elements strictly positive. Such a transition probability matrix, or the corresponding Markov chain, is called regular. The most important fact concerning a regular Markov chain is the existence of a limiting probability distribution, and this distribution is independent of the initial state. [@pinsky2010introduction]
> A transition probability matrix is called doubly stochastic if the columns sum to one as well as the rows. If the matrix is regular, then the unique limiting distribution is the uniform distribution. [@pinsky2010introduction]
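These properties are easy to check numerically. The sketch below (with an illustrative matrix of my own choosing) uses a regular, doubly stochastic 3×3 transition matrix and raises it to a high power; every row of the `k`-step matrix converges to the uniform distribution, as stated above:

```python
import numpy as np

# A regular transition matrix that is also doubly stochastic:
# all entries are positive and every row AND every column sums to one.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# np.linalg.matrix_power(P, k) is the k-step transition matrix; for a regular
# chain, every row converges to the same limiting distribution as k grows,
# independently of the initial state.
P_limit = np.linalg.matrix_power(P, 50)
print(P_limit)  # every row ≈ [1/3, 1/3, 1/3], the uniform distribution
```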
2 changes: 2 additions & 0 deletions web/03-Sto.Rmd
@@ -10,3 +10,5 @@ editor_options:
> Stochastic programming is a framework for modeling optimization problems that involve uncertainty, if the probability distributions governing the data are known or can be estimated. The goal here is to find some policy that is feasible for all (or almost all) the possible data instances and maximizes the expectation of some function of the decisions and the random variables.
> Stochastic dynamic programming deals with problems in which the current period reward and/or the next period state are random, i.e. with multi-stage stochastic systems. The decision maker's goal is to maximize expected (discounted) reward over a given planning horizon.
> Decision outcomes need to be characterized not only by their expected values but also by their variability levels, thus risk control of outcome volatility is needed and can be achieved using appropriate risk measures. [@conejo2010decision]
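A minimal two-stage sketch of these ideas (all prices, probabilities, and demands below are hypothetical): a here-and-now quantity `x` is bought at a forward price, the random demand is then revealed, and any shortfall is covered by recourse purchases at a higher spot price; we pick the `x` that minimizes expected cost over the scenarios.

```python
# Scenarios for the uncertain demand d: (probability, demand in MWh).
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]
forward_price, spot_price = 40.0, 70.0  # $/MWh

def expected_cost(x):
    """First-stage cost of buying x forward, plus the expected second-stage
    (recourse) cost of covering any shortfall d - x at the spot price."""
    recourse = sum(prob * spot_price * max(d - x, 0.0) for prob, d in scenarios)
    return forward_price * x + recourse

# Enumerate candidate first-stage purchases; keep the cheapest in expectation.
best = min(range(0, 151), key=expected_cost)
print(best, expected_cost(best))  # buying 100 MWh forward is optimal here
```

Replacing the expectation with a risk measure such as CVaR would penalize the costly high-demand scenario more heavily, reflecting the risk-control point above.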
13 changes: 0 additions & 13 deletions web/05-MDP.Rmd
@@ -6,16 +6,3 @@ editor_options:
# Markov Decision Process (MDP) {#MDP}

A Markov chain is a special kind of stochastic process that can model many phenomena. Merely describing a system is usually not enough, however, because managers and engineers also want to optimize its performance. A Markov decision process therefore extends the chain with the decisions to be made, formulating the optimization problem.

## Markov Chain

> A Markov process is a stochastic process with the property that, the probability of any particular future behavior of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behavior. [@pinsky2010introduction]
The stationary transition probability matrix (STPM) or Markov matrix (MM) can be used to describe the behavior of a Markov process.

> A Markov process is completely defined once its transition probability matrix and initial state (or, more generally, the probability distribution of the initial state) are specified. [@pinsky2010introduction]
> Suppose that a transition probability matrix on a finite number of states has the property that when raised to some power `k`, the `k`-step transition probability matrix has all of its elements strictly positive. Such a transition probability matrix, or the corresponding Markov chain, is called regular. The most important fact concerning a regular Markov chain is the existence of a limiting probability distribution, and this distribution is independent of the initial state. [@pinsky2010introduction]
> A transition probability matrix is called doubly stochastic if the columns sum to one as well as the rows. If the matrix is regular, then the unique limiting distribution is the uniform distribution. [@pinsky2010introduction]
11 changes: 11 additions & 0 deletions web/07-Inter.Rmd
@@ -0,0 +1,11 @@
---
editor_options:
chunk_output_type: console
---

# (PART) Applications {-}

# Intermediation below Commodity Markets

```{r child = '../docs/Inter-Elec.Rmd'}
```
8 changes: 0 additions & 8 deletions web/07-retailing.Rmd

This file was deleted.

Binary file removed web/MatrixOptim
Binary file removed web/MatrixOptim.rds
9 changes: 0 additions & 9 deletions web/_book/00-Intro.md

This file was deleted.

