How does server virtualization make a data center more efficient?
- By decreasing server utilization rates
- By increasing server utilization rates
- By reducing storage requirements
- It does not make a data center more efficient
EXPLANATION
Server virtualization makes a data center more efficient by increasing server utilization rates: fewer physical machines do the same work, which saves space, power, and cooling. Here are 10 tips for achieving these efficiencies:
1. Measure to control
“You can’t control what you can’t measure” is an old maxim of operational efficiency. We’ve found that efforts to reduce power inefficiencies need to begin with baseline measurements: if you don’t know where your power is going, you can’t know where to focus your attention. To help measure our power consumption, we break it down into each of these categories (a simple tallying sketch follows the list):
• IT Systems
• UPS
• Chillers
• Lighting
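To make that baseline concrete, here is a minimal sketch that tallies readings in those categories and reports each one's share of the total. The figures and structure are hypothetical, not our actual metering setup; in practice the readings would come from branch-circuit meters, UPS panels, and building-management data.

```python
# Minimal sketch of a power baseline broken down by category.
# The readings below are hypothetical placeholder values.
readings_kw = {
    "IT systems": 620.0,
    "UPS losses": 40.0,
    "Chillers": 210.0,
    "Lighting": 15.0,
}

total_kw = sum(readings_kw.values())
for category, kw in sorted(readings_kw.items(), key=lambda kv: -kv[1]):
    print(f"{category:12s} {kw:7.1f} kW  ({kw / total_kw:5.1%} of total)")
print(f"{'Total':12s} {total_kw:7.1f} kW")
```

Even a simple breakdown like this shows where to focus first: the biggest categories usually hide the biggest inefficiencies.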
2. Virtualize and consolidate IT systems
The EPA estimates that 50% of all data center power usage comes from servers and storage, which makes them logical targets for power savings. A hot trend right now is server virtualization, an effective strategy that produces savings in space, power and cooling.
To realize the full benefits of server virtualization, you need a storage infrastructure that provides pooled networked storage. The same savings that result from server virtualization apply to storage virtualization: fewer, larger storage systems provide more capacity and better utilization, resulting in less space, power and cooling.
By implementing storage and server virtualization, we have moved to a more energy-efficient storage model. We replaced 50 older storage systems with 10 of the latest storage systems and realized the following benefits (the percentage reductions are worked out in the short sketch after this list):
• Our storage rack footprint decreased from 25 racks to 6 racks
• Our power requirements dropped from 329 kW to 69 kW
• Our air conditioning capacity requirements went down by 94 tons
• The electricity costs to power those systems were reduced by $60,000 per year
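As promised above, here is a brief, purely illustrative calculation of the percentage reductions implied by those figures. The numbers are the ones listed; the script itself is just a worked example, not NetApp tooling.

```python
# Worked example: percentage reductions from the consolidation figures above.
before = {"storage systems": 50, "racks": 25, "power_kw": 329}
after = {"storage systems": 10, "racks": 6, "power_kw": 69}

for metric in before:
    reduction = 1 - after[metric] / before[metric]
    print(f"{metric:16s}: {before[metric]} -> {after[metric]} ({reduction:.0%} reduction)")
# storage systems : 50 -> 10 (80% reduction)
# racks           : 25 -> 6  (76% reduction)
# power_kw        : 329 -> 69 (79% reduction)
```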
3. Manage data
While planning storage virtualization, we conducted an audit of our existing business data. We discovered that 50% of the data we were storing could be eliminated.
The primary way to stop runaway data growth is to stop data proliferation before it starts. The average enterprise disk volume contains potentially millions of duplicate data objects. As these objects are modified, distributed, backed up, and archived, the duplicate data objects are stored repeatedly.
We use several different approaches, such as deduplication, cloning, and thin provisioning, that all work toward the same goal: reducing unnecessary data.
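As a deliberately simplified illustration of how block-level deduplication spots redundant data, the sketch below hashes fixed-size chunks and counts how many are duplicates. Real deduplication engines are far more sophisticated, and nothing here reflects NetApp's actual implementation.

```python
import hashlib

def dedupe_stats(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and count total vs. duplicate chunks."""
    seen = set()
    total = duplicates = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        total += 1
        if digest in seen:
            duplicates += 1       # this chunk's content is already stored once
        else:
            seen.add(digest)      # first time this content has been seen
    return total, duplicates

# Toy example: the same 4 KiB block repeated many times dedupes to one stored copy.
payload = (b"A" * 4096) * 100 + (b"B" * 4096) * 10
total, dupes = dedupe_stats(payload)
print(f"{total} chunks scanned, {dupes} duplicates ({dupes / total:.0%} reducible)")
```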
4. Eliminate overcooling of systems
Cooling is one area where IT departments often overspend and miscalculate. Manufacturers usually base their power consumption estimates on running peak loads all the time. Ask yourself: how often do your systems run at peak capacity? The answer is probably almost never. So why would you cool your systems as if they did?
The key is to calculate accurate power loads, which can be tricky. To help us arrive at reasonable power-load estimates, we test equipment in our lab environment before we deploy it in our data center. By conducting these tests, we have determined that reasonable power-load estimates for our systems are 30% to 40% lower than manufacturer estimates. Knowing that, we can monitor rack-by-rack power usage and tune our cooling systems accordingly, which cuts down on the amount of energy wasted from overcooling.
But it doesn’t stop there. We took it one step further by using variable frequency drives on our air handlers. Instead of running our fans at 100% speed all of the time, variable frequency drives vary the speed of the fans depending on what’s actually needed to cool the equipment on a row-by-row basis. With temperatures constantly monitored and fan speeds automatically adjusted up or down as needed, the savings are substantial, because fan power scales roughly with the cube of fan speed: a 50% reduction in fan speed yields a reduction in power consumption of about 87%.
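The cube relationship behind that last figure is the fan affinity law. A quick illustrative sketch:

```python
# Fan affinity law: fan power scales approximately with the cube of fan speed.
def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.5):
    saving = 1 - fan_power_fraction(speed)
    print(f"{speed:.0%} speed -> {fan_power_fraction(speed):.1%} power ({saving:.1%} saving)")
# 100% speed -> 100.0% power (0.0% saving)
# 80% speed  -> 51.2% power (48.8% saving)
# 50% speed  -> 12.5% power (87.5% saving)
```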
5. Work with physics in data center layout
Here’s a quick physics refresher - hot air rises and cold air falls. The same rules apply in the data center. The old way of cooling up from the floor in data centers with raised floors usually requires extra energy. Instead, we drop a curtain of cold air down the front of the machines. The cold air is then drawn into the machines and exits out the back as hot air, which then rises to the ceiling and is vented outside.
However, the setup of the racks is extremely important since you don’t want to vent the hot exhaust of one machine into the air intake of another. Instead, we place our racks front-to-front and back-to-back. This arrangement, called “hot aisle/cold aisle,” has become a best practice for data center design because of its efficiency.
With our cooling system above the racks, we also eliminate the need for a raised floor (which typically accommodates cold-water piping and other cooling infrastructure), providing additional energy and space efficiencies.
6. Continuously improve heat containment
Wherever you have high-density racks in a hot aisle/cold aisle arrangement, you need additional airflow measures to prevent hot exhaust from getting into the cold aisle. This is where a little low-tech ingenuity comes into play.
While the hot air that rises is vented out, we also use a low-cost technique that is surprisingly effective: vinyl curtains, just like the ones you’d see in a meat locker, hung at the ends of the hot aisles and around the cooling outtake system above the racks. The vinyl strips contain the air in the hot aisles and form a physical barrier around ducts and equipment.
We estimate that the curtains in Sunnyvale alone will save us 1 million kWh per year.
7. Maximize free cooling
Generating cold air doesn’t have to be your only source of cooling. Why not use Mother Nature as well? By using outside air as free cooling, we save $1.5 million a year in energy costs. To do this, dampers are built into the side of our building that automatically modulate the outside air coming in. When the outside air temperature is lower than the established temperature point, the dampers open up and outside air filters into the cooling system. Conversely, when the outside air rises above the temperature set point, the dampers close and the chillers take over.
Fine-tuning and enhancing this system is a continuous work in progress, though. Thanks to our environmental engineers, we are working to raise the temperature set point below which we can use outside air. We originally started with a set point of 52 degrees F and have gradually moved it up to 65 degrees F. We are in the process of raising it even further, to 75 degrees F, which could increase our free cooling hours to 85% of the year.
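A simplified decision loop for this kind of air-side economizer might look like the following. The function name and the sample temperatures are hypothetical; this is not the actual building-management logic, just the idea of switching between free and mechanical cooling at a set point.

```python
# Simplified air-side economizer decision: use outside air when it is colder
# than the set point, otherwise close the dampers and run the chillers.
SET_POINT_F = 65.0  # current set point cited in the article; originally 52 F

def cooling_mode(outside_temp_f: float, set_point_f: float = SET_POINT_F) -> str:
    if outside_temp_f < set_point_f:
        return "dampers open: free cooling with outside air"
    return "dampers closed: mechanical cooling (chillers)"

for temp in (48.0, 64.0, 72.0):
    print(f"{temp:5.1f} F outside -> {cooling_mode(temp)}")
```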
8. Minimize electrical conversion losses
Instead of using a battery-based system, our Sunnyvale data center today features kinetic UPSs that store energy as motion. Energy comes in from our switching infrastructure and spins an electric motor in each of the UPS units. The flywheels store enough energy to ride through 15 to 20 seconds of power loss, just enough to carry out any of our switching operations. While older battery UPSs are 85% efficient and today’s best UPSs average roughly 94%, flywheel UPSs, with an efficiency rating of 97.7%, lose less than half the energy that batteries do.
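To see why a 97.7% rating roughly halves the losses of even a 94%-efficient battery UPS, here is a brief worked comparison using the efficiency figures above; the 500 kW continuous load is an arbitrary assumed value for illustration.

```python
# Energy lost per year in a UPS delivering a fixed IT load, for the
# efficiency figures cited above. LOAD_KW is an assumed, illustrative value.
LOAD_KW = 500.0
HOURS_PER_YEAR = 8760

for name, efficiency in [("older battery UPS", 0.85),
                         ("best battery UPS", 0.94),
                         ("flywheel UPS", 0.977)]:
    input_kw = LOAD_KW / efficiency              # power drawn to deliver LOAD_KW
    loss_kwh = (input_kw - LOAD_KW) * HOURS_PER_YEAR
    print(f"{name:18s}: {loss_kwh:,.0f} kWh lost per year")
# The flywheel's losses come out at well under half those of the
# 94%-efficient battery UPS at the same load.
```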
9. Use heat that would be wasted
Power demand and energy prices go hand-in-hand with soaring summer temperatures. During these temperature and electricity peak times, our natural gas-powered cogeneration system goes online to economically power our one-megawatt data center. We benefit in two different ways from this approach.
First, by generating electricity so close to where it is used (known as distributed generation), we lower power costs and reduce the amount of electricity lost in transmission.
The second advantage stems directly from cogeneration. Cogeneration is a thermodynamically efficient use of fuel. It puts to work the large quantities of wasted heat that result from electricity production. In our Sunnyvale data center, we utilize the heat that is produced by our gas-powered generators to power an adsorption chiller that chills the water used in the cooling system. Our cogeneration system has an overall efficiency rating of 75% to 85% and saves us $300,000 annually.
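Cogeneration's "overall efficiency" counts both the electricity produced and the recovered heat against the fuel burned. A small worked sketch with hypothetical figures shows how an 80% combined figure can arise even when electrical efficiency alone is much lower:

```python
# Combined heat and power (CHP) efficiency: useful electricity plus recovered
# heat, divided by fuel energy input. All figures below are hypothetical.
fuel_input_kw = 2500.0         # energy content of natural gas burned
electrical_output_kw = 1000.0  # electricity for the one-megawatt data center
recovered_heat_kw = 1000.0     # waste heat fed to the chiller instead of vented

electrical_efficiency = electrical_output_kw / fuel_input_kw
overall_efficiency = (electrical_output_kw + recovered_heat_kw) / fuel_input_kw
print(f"Electrical efficiency alone: {electrical_efficiency:.0%}")  # 40%
print(f"Overall CHP efficiency:      {overall_efficiency:.0%}")     # 80%, within the 75-85% range cited
```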
10. Constantly monitor and tune
One more step that you can take to help drive data center efficiency is to accurately and regularly monitor your environment. Most data centers measure load at the perimeter, which, predictably, makes things unpredictable. To truly enable energy efficiency, it’s important to take metering one step further and measure at the rack level (watts per rack), rather than just cranking up the fans when a particular area of the data center starts to run hot. We constantly test and tune our environment, and our multiple temperature sensors at the mid-level register a 10- to 12-degree differential on average.
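A bare-bones sketch of rack-level metering with a simple hot-spot flag follows. The rack names, readings, and per-rack budget are hypothetical; real readings would come from metered PDUs or similar instrumentation.

```python
# Minimal rack-level monitoring sketch: flag racks whose draw exceeds a budget.
# These readings and the budget are placeholder values for illustration.
rack_watts = {"rack-01": 4200, "rack-02": 6800, "rack-03": 3900, "rack-04": 7500}
WATT_BUDGET_PER_RACK = 6000

for rack, watts in rack_watts.items():
    status = "OVER BUDGET - review cooling and tuning" if watts > WATT_BUDGET_PER_RACK else "ok"
    print(f"{rack}: {watts} W  [{status}]")
```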
Not all data centers are created equal. How you and your company decide to implement a power-efficient data center strategy depends on your specific conditions. However, this list of 10 techniques that NetApp has implemented should give you some good ideas and places to start. If you don’t already know it, I would urge you to determine your data center’s Power Usage Effectiveness (PUE) rating. From there you will be able to develop the appropriate techniques and improvements to set your data center efficiency program in motion.
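PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A quick sketch with made-up numbers:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly figures: 900 MWh for the whole facility, 600 MWh to IT gear.
print(f"PUE = {pue(900_000, 600_000):.2f}")  # 1.50: every IT watt costs 0.5 W of overhead
```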
Laura Pickering is Vice President and Environmental Responsibility Advocate for NetApp.