A MULTI-AGENT DEMAND RESPONSE PLANNING AND OPERATIONAL OPTIMIZATION FRAMEWORK
Degree granted by The University of Texas at Arlington
Abstract
This research describes a real-time optimization model for multi-agent demand response (DR) from a Load Serving Entity (LSE) perspective. We formulate two infinite-horizon stochastic optimization models: an LSE model and a dynamic-pricing customer model. The objective of these models is to minimize the long-term cost and discomfort penalty of the LSE and the dynamic-pricing customers. We solve a deterministic finite-horizon linear program as an approximation of the proposed stochastic models and report computational experiments. In stochastic programming (SP), a wait-and-see solution is at least as good as an optimal policy, whereas a policy that uses the expected value problem is never better than an optimal policy. These bounds are well established in SP when there is a single agent. A natural question is whether analogous bounds exist when there are two agents, and the present study develops a research methodology to answer it. Our experiments show that when there are two separate agents and both receive perfect information, the outcome can be worse than when both agents solve the mean value (expected value) problem. Nevertheless, we find that bounds do hold when both agents follow the same set of first-stage actions. A two-agent demand response problem is used as a case study to support this claim.
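For reference, the single-agent relationships invoked above are the classical stochastic programming bounds; the sketch below states them in LaTeX for a minimization problem. The symbols WS, RP, EEV, EVPI, and VSS are standard textbook shorthand introduced here for illustration and do not appear in the abstract itself.

% Classical single-agent SP bounds (minimization). Illustrative symbols:
% WS  = wait-and-see value (perfect information),
% RP  = optimal here-and-now (recourse problem) value,
% EEV = expected cost of using the expected value (mean value) solution.
\[
  \mathrm{WS} \;\le\; \mathrm{RP} \;\le\; \mathrm{EEV},
  \qquad
  \mathrm{EVPI} = \mathrm{RP} - \mathrm{WS} \;\ge\; 0,
  \qquad
  \mathrm{VSS} = \mathrm{EEV} - \mathrm{RP} \;\ge\; 0 .
\]

As the abstract notes, the left inequality can fail when two agents optimize independently under perfect information, while analogous bounds are recovered when both agents follow the same set of first-stage actions.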