Optimal Threshold Policies for Robust Data Center Control

  • (1. SYSU-CMU Joint Institute of Engineering, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China; 2. SYSU-CMU Joint Research Institute, Shunde 528300, Guangdong, China; 3. Department of Electrical & Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA)

Online published: 2018-02-01

Abstract

With energy costs and demand for cloud computing rising simultaneously, efficient control of data centers has become crucial. In the data center control problem, one must decide at every time step how many servers to switch on or off in order to serve stochastic job arrivals while minimizing electricity consumption. The problem becomes particularly challenging when servers are of various types and jobs from different classes can only be served by certain server types, as is often the case in real data centers. We model this problem as a robust Markov decision process (i.e., the transition function is not assumed to be known precisely). We give sufficient conditions (which appear reasonable and are satisfied in practice) guaranteeing that an optimal threshold policy exists. This property can then be exploited in the design of an efficient solution method, which we provide. Finally, we present experimental results demonstrating the practicality of our approach and compare it with a previous related approach based on model predictive control.
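The threshold structure the abstract refers to can be illustrated with a small sketch. This is an illustration only: the state variables (number of active servers, queue length) and the threshold levels below are hypothetical, not taken from the paper, which handles multiple server types and job classes.

```python
def threshold_action(active_servers, queue_length, thresholds):
    """Threshold policy sketch: the target number of active servers is the
    target associated with the highest queue-length level that has been
    reached; the action is the (signed) number of servers to switch on
    (positive) or off (negative) to meet that target.

    `thresholds` is a list of (queue_length_level, target_servers) pairs
    sorted by increasing queue_length_level.
    """
    target = 0
    for level, n_servers in thresholds:
        if queue_length >= level:
            target = n_servers
    return target - active_servers
```

For example, with levels `[(0, 1), (10, 3), (50, 8)]`, a queue of 12 jobs and 2 active servers yields +1 (switch one server on), while an empty queue with 5 active servers yields -4 (switch four off). The paper's optimality result concerns policies of exactly this monotone-threshold form, computed against the worst-case transition model in the uncertainty set.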

Cite this article

WENG Paul1,2*, QIU Zeqi3 (邱泽麒), COSTANZO John3, YIN Xiaoqi3 (阴小骐), SINOPOLI Bruno3. Optimal Threshold Policies for Robust Data Center Control[J]. Journal of Shanghai Jiaotong University (Science), 2018, 23(1): 52-60. DOI: 10.1007/s12204-018-1909-x
