Source: Consulting – Specifying Engineer
By: Scott Gatewood, PE, DLR Group, Omaha, Neb.
Evolving technologies have developed into best practices to create secure, reliable, highly available, and adaptable data center spaces.
- Explore the history, trends, and best practices that enhance data center design.
- Explain how to balance power usage effectiveness (PUE) and electrical efficiency with reliability when designing data center electrical distribution systems.
- Describe how to ensure that data center electrical systems are safe as well as reliable.
Designing increasingly efficient and reliable data centers continues to be a high priority for consulting engineers. From the continuity of business and government operations to the recent rise in new cloud services and outsourcing, the increasing demands on Internet service continually place strains on design, energy consumption, and the operators who make these facilities run. While designing to incorporate state-of-the-art systems and equipment, we must not forget the functional needs of the data center operators and facilities staff.
Increasing Internet demands have strained server and storage capacity. In response, the number of servers needed continues to grow exponentially—even after the 2008 server virtualization revolution. To meet the infrastructure demands this has created, new data center power and cooling designs are providing expanded capacity and increased efficiency in every part of the design. Driven largely by the economics of energy lifecycle costs and a growing awareness of the vast amount of power that data centers require, innovation has emerged across the electrical and information technology (IT) equipment space.
Efficiency has always been a requirement of integrated design. It could be summarized as an iterative process of balancing architecture and engineering responses to the natural environment the facility shares. The facility’s geographic location, orientation, and exposures to that environment play critical roles in the thermal exchanges that occur. This informs engineers regarding materials used for those exposures to efficiently balance mechanical and electrical designs. Efficient, nondestructive scaling is key. The infrastructure and systems within must be considered for capacity planning to serve the initial data center IT space efficiently. Care must be taken to create efficient operations at initial low-demand levels and to preserve the ability to scale to future higher demands with minimal capital tied up in oversizing and with no future disruptions.
Energy efficiency is important, but it does not complete the picture of design. Facilities also must provoke a human response. The perception of beauty, proportion, and style that inspires emotion is critical. Design is more than energy and performance. Integrated design—done well—produces an emotional response. We experience this when looking at a stylish car. The balance of form and function, combined with the place the facility shares in the environment, illustrates integrated design. Efficiency is only a part of the equation, but it is the key to operational effectiveness and energy cost control over the life of the facility, just as engine performance and gas mileage are to a stylish ride. Here, the engineer can have a great impact on the economic and environmental concerns that support the business of data center operations.
The first data center energy efficiency metrics were summarized by the Green Grid. Founded in 2007 by the who’s who of data center IT equipment manufacturers, the Green Grid summarized power usage effectiveness (PUE) in an equation that today remains a simple ratio showing how effectively a data center uses mechanical and electrical infrastructure energy.
PUE = total facility energy / IT equipment energy
The equation provides a simple way to compare the ideal data center PUE of 1.0 to the actual percentage associated with the electrical and mechanical systems needed. Total facility energy must contain all power needed to support the data center environment and the IT equipment within the data center. The resulting PUE, averaged over an annual basis, reflects the percentage above 1.0 required for non-IT equipment. For example, a PUE of 1.5 shows that in addition to the direct energy needed to operate servers, the network, storage, etc., the data center requires 50% more energy to support that equipment.
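To make the arithmetic concrete, the ratio and its overhead interpretation can be sketched in a few lines of Python (the annual meter readings below are hypothetical, chosen only to reproduce the PUE 1.5 example):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings (kWh)
total_kwh = 15_000_000   # all facility energy: IT load plus cooling, UPS losses, lighting
it_kwh = 10_000_000      # energy delivered to servers, storage, and network gear

ratio = pue(total_kwh, it_kwh)
overhead_pct = (ratio - 1.0) * 100  # energy above 1.0 supports non-IT infrastructure
print(f"PUE = {ratio:.2f}; non-IT overhead = {overhead_pct:.0f}%")
# PUE = 1.50; non-IT overhead = 50%
```

As the article notes, the inputs should be annual totals so that seasonal cooling swings are averaged into a single representative figure.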
The arrival of PUE allows comparisons and competitions among data center designs located in similar climates and helps establish best practice design responses in similar climate zones. For example, a data center in Iceland’s cool climate would compare favorably to an identical data center in South Florida. Mechanical cooling energy transfer into and out of the data center is strongly influenced by the environment the data center shares and the systems deployed.
Although PUE does not capture the IT hardware deployment efficiency (i.e., percentage virtualized, percentage used, etc.), it does normalize the result to reveal how well the electrical and largely the mechanical engineering response maintains the data center environment while lowering its impact on the natural environment.
PUE is only a measurement method. Many codes and standards have emerged over the last 10 years to specifically address data centers. ASHRAE has developed a comprehensive practical engineering response focused specifically on the uniqueness of data center environments. Efficient data centers and supporting space-engineering practices, tactics, and requirements are framed in ASHRAE Technical Committee (TC) 9.9’s Datacom Series guidelines and in recent updates to ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings.
Electrical system efficiency strives to minimize voltage and current conversion losses. The impedance in transformers, uninterruptible power supplies (UPSs), power supplies, lighting, mechanical equipment, and the wiring plant—combined with controls—affects electrical efficiency opportunities. Higher voltages to the rack, UPS bypass or interactive modes, and switch-mode power supplies form the heart of electrical energy advances. The use of transformers optimized to achieve efficient low-loss performance at lower loads (30% and above) has emerged as a mainstay. Increasingly, these transformers also deliver higher voltages (240 Vac) to the rack, which lowers IT equipment switch-mode power supply energy losses. Perhaps UPS systems have seen the most attention, with improved conversion technologies and even line interactive operation mode. In the past, line interactive mode would have been considered risky.
With the equipment winding efficiency gains from transformers and motors, engineers must pay special attention to available fault current and ampere interrupting capacity (AIC) management. Higher efficiencies result in larger available fault currents and, consequently, elevated arc flash hazards if not managed. NFPA 70E: Standard for Electrical Safety in the Workplace and contractors’ risk managers place safety above business continuity. Hence, minimizing fault energy at the power distribution units is important to risk management within the data center space. Consideration should be given to current-limiting circuit breakers within the UPS distribution to lower fault energy, and to selective coordination throughout the power chain. These efforts are a small undertaking for the operational savings and enhanced safety they provide.
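The link between lower winding impedance and higher available fault current can be illustrated with the common infinite-bus approximation, in which secondary bolted fault current is estimated as transformer full-load amperes divided by per-unit impedance. The transformer ratings below are illustrative assumptions, not values from the article:

```python
import math

def transformer_fla(kva: float, volts_ll: float) -> float:
    """Full-load amperes of a three-phase transformer."""
    return kva * 1000 / (math.sqrt(3) * volts_ll)

def bolted_fault_current(kva: float, volts_ll: float, z_pct: float) -> float:
    """Infinite-bus estimate of secondary bolted fault current.
    A lower %Z (a lower-loss, more efficient winding) yields a
    proportionally higher available fault current."""
    return transformer_fla(kva, volts_ll) / (z_pct / 100.0)

# Illustrative 1,500 kVA, 480 V transformer at two assumed impedance values
for z_pct in (5.75, 3.5):
    amps = bolted_fault_current(1500, 480, z_pct)
    print(f"%Z = {z_pct}: available fault current ~ {amps:,.0f} A")
```

The estimate ignores source and conductor impedance, so it is conservative (high); it nonetheless shows why efficiency gains must be paired with AIC review, current limiting, and selective coordination.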
To address the most common and often most physically destructive fault condition, ground faults, and simultaneously maintain the highest degree of availability, engineers must consider pushing ground fault interruption further into the distribution. By using ground fault detection and interruption to isolate individual main distribution segments, main breakers can be engaged at different fault conditions. Avoiding main breaker ground fault interruption should be a priority. Main switchgear provided with optic fault detection and current reduction circuitry—a relative newcomer to selective coordination—can isolate faults to switchgear compartments. Not to be disregarded, engineers may employ a high-resistance grounding design that allows ground faults to be sustained at lower energies until the location can be identified. Each approach comes with benefits and compromises that engineers must evaluate based on the electrical distribution strategy employed.
Electrical engineers also must pay close attention to the site’s soil conductivity when significant power conductors are located underground or under slabs. Energy losses from continuous high load factors require careful analysis to accurately size these underground feeders for the heating effects unique to the data center’s continuous loads. Following the analysis of load factors, expect concrete encasement, feeder oversizing, and spread duct banks to reduce the heating effects that data center load profiles create. In addition, soil reports allow accurate grounding calculations and identify the water table depth. Ground electrodes that reach the water table are very beneficial because of their low impedance.
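Because resistive heating scales with the square of current, a feeder carrying a data center’s near-continuous load deposits far more heat into a duct bank than the same feeder under an intermittent commercial profile. A rough sketch of that comparison, using an assumed per-metre conductor resistance and assumed load factors chosen only for illustration:

```python
def i2r_loss_watts_per_m(current_a: float, r_ohm_per_m: float, conductors: int = 3) -> float:
    """Resistive heat dissipated per metre of feeder run, summed over
    all phase conductors (P = I^2 * R per conductor)."""
    return conductors * current_a ** 2 * r_ohm_per_m

R_PER_M = 0.0003          # ohm per metre per conductor, assumed for illustration
FEEDER_AMPACITY = 600.0   # amps, assumed feeder size

office_amps = 0.40 * FEEDER_AMPACITY  # intermittent commercial profile, ~40% load factor
dc_amps = 0.90 * FEEDER_AMPACITY      # data center feeders run near-continuously

print(f"Office-profile heat:      {i2r_loss_watts_per_m(office_amps, R_PER_M):.1f} W/m")
print(f"Data-center-profile heat: {i2r_loss_watts_per_m(dc_amps, R_PER_M):.1f} W/m")
```

At these assumed load factors, the continuous profile produces roughly (0.9/0.4)² ≈ 5 times the heat per metre, which is why soil thermal analysis, concrete encasement, oversizing, and spread duct banks appear in data center feeder designs.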
New cloud-based data centers have pressed ever-higher power densities and load factors, which create a strong undertow for efficiency. To serve rack loads of 10 to 30 kW (or more), designs may require the addition of cooling liquids to the rack, closely coupled redundant cooling, and thermal storage systems. Engineers must balance PUE and electrical efficiency with availability when designing data center electrical distribution systems that are in close proximity to water and other cooling liquids. The unique demands of creating an always-on and serviceable design that acknowledges failure potential—even failures stemming from water leaks at the rack level—are critical. How operators will service an event without an outage remains a serious component in every design response.