Guest Editors' Introduction: Special Issue on Green and Energy-Efficient Cloud Computing: Part I
Ricardo Bianchini, Samee U. Khan, Carlo Mastroianni
IEEE Transactions on Cloud Computing
Cloud Computing has had a huge commercial impact and has attracted the interest of the research community. Public clouds allow their customers to outsource the management of physical resources and rent a variable amount of resources in accordance with their specific needs. Private clouds allow companies to manage on-premises resources while exploiting the capabilities offered by cloud technologies, such as virtualization to improve resource utilization and cloud software for resource management automation. Hybrid clouds, in which private infrastructures are integrated with and complemented by external resources, are becoming a common scenario as well, for example to handle load peaks.

Cloud applications are hosted by data centers whose size ranges from tens to tens of thousands of servers, which raises significant challenges related to energy and cost management. It has been estimated that the Information and Communication Technology (ICT) industry alone is responsible for 2-3 percent of global greenhouse gas emissions. Therefore, we must find innovative methods and tools to manage the energy efficiency and carbon footprint of data centers, so that they can operate and scale in a cost-effective and environmentally sustainable manner. These methods and tools are often categorized as Data Center Infrastructure Management (DCIM), whose goal is to monitor, control, and optimize data centers with extensive automation. DCIM must also effectively manage the quality of service provided by the data center, since cloud customers require high reliability, availability, and usability, as well as low response times.

While significant advances have been made in the physical efficiency of power supplies and cooling components, which improve the Power Usage Effectiveness (PUE) index, such improvements are often limited to the huge data centers run by large cloud companies. An even greater effort is needed to improve data center computational efficiency, as servers today are highly underutilized, typically operating at between 10 and 30 percent utilization. In this respect, advances are needed both to improve the energy efficiency of individual servers and to dynamically consolidate the workload onto fewer, better utilized servers.

This special issue has offered the scientific and industrial communities a forum to present new research, development, and deployment efforts in the field of green and energy-efficient Cloud Computing.
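The link between underutilization and wasted energy can be made concrete with a back-of-the-envelope sketch. The numbers below are hypothetical, and the linear power model (idle power plus a utilization-proportional term) is a common approximation rather than a result from any paper in this issue; it illustrates why consolidating the same workload onto fewer, busier servers reduces energy draw.

```python
# Hypothetical server power model: linear in utilization u in [0, 1].
# P_IDLE and P_PEAK are illustrative values, not measurements.
P_IDLE = 50.0   # watts drawn by an idle server (assumed)
P_PEAK = 100.0  # watts at full utilization (assumed)

def server_power(u):
    """Common linear approximation of server power versus utilization u."""
    return P_IDLE + u * (P_PEAK - P_IDLE)

def fleet_power(n_servers, total_work):
    """Total IT power when total_work (in server-equivalents) is spread evenly."""
    u = total_work / n_servers
    return n_servers * server_power(u)

# The same 2.0 server-equivalents of work, before and after consolidation:
spread = fleet_power(10, 2.0)       # 10 servers at 20 percent utilization
consolidated = fleet_power(4, 2.0)  # 4 servers at 50 percent utilization
print(spread, consolidated)  # 600.0 300.0: consolidation halves IT power here

# PUE multiplies IT power into facility power (cooling, power delivery, etc.);
# a PUE of 1.5 is an assumed value for illustration.
PUE = 1.5
print(spread * PUE)  # 900.0 watts at the facility level
```

Because a large fraction of a server's power is drawn even when idle, halving the number of active servers at constant total work halves the idle overhead, which is the core argument for dynamic workload consolidation.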
Indeed, the special issue attracted a large number of good-quality papers. After two or, in some cases, three rounds of reviews, each involving at least three expert reviewers, 18 papers were selected for publication among the 44 initially submitted. The accepted papers have been split into two issues of this journal. The present issue includes nine papers that focus on the opportunities offered by modern virtualization technology for reducing energy consumption and carbon emissions, through techniques and methods that aim to achieve optimal allocation and scheduling of virtual machines (VMs), both on single platforms and in geographically distributed scenarios involving multiple data centers. A forthcoming issue will include nine papers more specifically devoted to the efficient management of the physical infrastructure of data centers and cloud facilities.

K. Li develops a queuing model for a multicore system with workload-dependent dynamic power management, and derives for that model the necessary and sufficient conditions under which the average task response time is minimized. The most notable result of this work is that power consumption reduction subject to performance guarantees can be studied in much the same way as performance improvement (average task response time reduction) subject to power constraints. The work also presents several speed schemes demonstrating that, for the same average power consumption, a multicore processor can be designed whose average task response time is shorter than that of a multicore processor with a uniform clock rate.

The next paper presents an eco-aware approach that relies on the definition, monitoring, and utilization of energy and CO2 metrics, combined with innovative application scheduling and runtime adaptation techniques. The aim is to optimize the energy consumption and CO2 footprint of cloud applications as well as of the underlying infrastructure.
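The kind of trade-off explored in Li's speed schemes can be sketched numerically. The toy model below is an assumption of ours, not Li's: each core is treated as an independent M/M/1 queue, per-core power follows a cube law (P = s^3), and a grid search compares uniform and non-uniform speed/routing designs under the same total power budget. Such a simplified model need not reproduce the paper's analytical results; it only illustrates the search space.

```python
# Hedged sketch: per-core M/M/1 response times under a cube-law power budget.
# All parameter values are illustrative assumptions, not taken from Li's paper.

def mm1_response_time(speed, arrival_rate, base_service_rate=1.0):
    """Mean response time of an M/M/1 queue whose server runs at `speed`.
    Returns None if the queue would be unstable."""
    mu = speed * base_service_rate
    if mu <= arrival_rate:
        return None
    return 1.0 / (mu - arrival_rate)

def mean_response_time(speeds, arrivals):
    """Arrival-weighted mean response time across cores; None if any is unstable."""
    total = sum(arrivals)
    t = 0.0
    for s, lam in zip(speeds, arrivals):
        if lam == 0.0:
            continue
        ti = mm1_response_time(s, lam)
        if ti is None:
            return None
        t += (lam / total) * ti
    return t

# Two cores, total arrival rate 1.0, power model P = s**3 per core.
# Uniform design: both cores at speed 1.0, total power budget 2.0.
uniform_T = mean_response_time([1.0, 1.0], [0.5, 0.5])

# Grid search over speed s1 and routing split lam1 at the same power budget;
# the grid includes the uniform point, so best_T can never exceed uniform_T.
POWER_BUDGET = 2.0
best_T = uniform_T
for i in range(1, 130):
    s1 = i / 100.0
    remaining = POWER_BUDGET - s1 ** 3
    if remaining < 0.0:
        break
    s2 = remaining ** (1.0 / 3.0)
    for j in range(0, 101):
        lam1 = j / 100.0
        T = mean_response_time([s1, s2], [lam1, 1.0 - lam1])
        if T is not None and T < best_T:
            best_T = T
print(uniform_T, best_T)
```

Li's paper works with a far richer model (workload-dependent dynamic power management and formal optimality conditions); the sketch merely shows how one would enumerate speed schemes at a fixed power budget and compare their mean response times against the uniform-clock-rate baseline.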