Thursday, November 11, 2010

Icelandic Data Centers - from my Iceland 101 Blog

Gabe Cole
101Iceland: Rethinking Icelandic Data Center Opportunities

Wednesday, November 3, 2010

Latency and Data Center Siting

Today, there is much emphasis on locating cloud services in mammoth data centers sited to take advantage of low-cost power and climates favorable to efficient cooling.

So, why don't we move everything to such facilities? There are a number of reasons:
  • Companies whose systems require care and feeding by staff in a specific geographic location.
  • Privacy laws that require data to be kept within certain boundaries.
  • Disaster recovery and accessibility concerns.
  • Physical latency, which is the time it takes to communicate with the facility.
Today, I am going to address the issue of latency, which at its core is limited by physical laws of nature.

Latency is the delay in communication, generally measured in milliseconds (ms), each of which is 1/1000 of a second. It is generally expressed as "round trip latency": the time it takes information to travel from point A to point B and back to point A. This convention stems partly from the traditional tools (e.g., "ping") used to measure latency, and partly because a round trip simulates the effective time to make and confirm a transaction.

Round trip latency between New York and London is about 100 ms, and New York to Los Angeles is about 70 ms.

This delay is important because it affects the performance and economics of many transactions. Program trading operations, for example, want to minimize latency as much as possible and, as such, want to locate within 20 miles of the "trading" center with latency below 2 ms. Many other applications perform best with latencies of less than 20 ms, implying a distance of less than 700 miles.
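The physics behind these rules of thumb can be sketched with a quick calculation. As a minimal illustration (not from the post), assume light travels through fiber at roughly two-thirds the speed of light in vacuum, about 124 miles per millisecond; real networks add routing and equipment overhead on top of this floor, which is why the 700-mile figure above leaves roughly half of a 20 ms budget for overhead:

```python
# Rough lower bound on round-trip latency imposed by signal speed in
# fiber (~2/3 of c, assumed here as ~124 miles per millisecond).
# Real networks add routing and equipment overhead on top of this.

FIBER_SPEED_MILES_PER_MS = 124.0

def min_round_trip_ms(distance_miles):
    """Physical minimum round-trip latency for a given one-way distance."""
    return 2 * distance_miles / FIBER_SPEED_MILES_PER_MS

def max_distance_miles(round_trip_ms):
    """Farthest one-way distance reachable within a round-trip budget."""
    return round_trip_ms * FIBER_SPEED_MILES_PER_MS / 2

print(f"700 miles    -> at least {min_round_trip_ms(700):.1f} ms round trip")
print(f"20 ms budget -> at most {max_distance_miles(20):.0f} miles before overhead")
```

Note that the theoretical maximum for a 20 ms budget is well over 700 miles; the gap is consumed by switches, routers, and non-straight-line fiber routes.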

The latency tolerance of applications is increasing, and many have become virtually latency insensitive. It is this class of applications that can most easily be moved to a data center optimally located for power and cooling.

The more sophisticated IT departments now segregate their applications by latency, as well as other issues. This allows them to group what must be done at local vs remote data centers, and may include several different gradations along the way - such as one data center within 20 miles, one within 250 miles, one within 500 miles, etc.
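Such a segregation scheme can be expressed as a simple mapping from latency budget to data center tier. This is a hypothetical sketch; the application names and thresholds are illustrative examples, not from the post:

```python
# Illustrative grouping of applications into data center tiers by
# maximum tolerable round-trip latency. Tiers loosely follow the
# gradations mentioned above (thresholds are assumptions).

TIERS = [  # (max round-trip ms, tier description)
    (2,    "local (< 20 miles)"),
    (20,   "regional (< 700 miles)"),
    (100,  "national"),
    (None, "anywhere (latency insensitive)"),
]

def assign_tier(max_latency_ms):
    """Return the first tier whose latency budget covers the application."""
    for budget, tier in TIERS:
        if budget is None or max_latency_ms <= budget:
            return tier

apps = {
    "program trading": 2,
    "interactive web app": 50,
    "batch analytics": float("inf"),  # effectively latency insensitive
}
for name, budget in apps.items():
    print(f"{name}: {assign_tier(budget)}")
```

Grouping applications this way lets the latency-insensitive tier be placed in a remote, power-optimized facility while only the truly sensitive workloads pay for proximity.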

Friday, October 22, 2010

Does the Equinix "miss" Signal the Beginning of the End?

On October 5, Equinix announced that both their third quarter and full year revenues for 2010 would be 1 to 2 percent lower than previously forecast, although their EBITDA would be higher than previously forecast. The stated reasons include (i) "underestimated churn assumptions in Equinix's forecast models in North America", (ii) "greater than expected discounting to secure longer term contract renewals," and (iii) "lower than expected revenues attributable to the Switch and Data business acquired in April 2010."

So, what does this mean for the industry?

At first cut, it appears to be primarily the result of the shortcomings of Equinix's integration of Switch and Data, but I do think that this highlights some core trends that the industry needs to be aware of:
  1. Customers today consider moving a viable option - the move to virtualized environments and converged architecture make it easier for data center users to migrate to a new facility. This is amplified by the fact that many customers are currently implementing these solutions and require some type of migration - either in place or to a new facility.
  2. There is an efficient marketplace - the emergence of national/international databases, reporting, and competitive placement have made it very easy for customers to understand what market rates are and to obtain competing proposals.
  3. Capacity is not constrained in most markets - the facility expansions of recent years combined with improvements to many facilities mean that there is available capacity in most markets.
My takeaway is that it is finally time for the pure infrastructure companies to start climbing the services stack. This will be the only way to maintain revenue and profit growth in the developed markets of the US and Western Europe.

Monday, March 8, 2010

kW vs kVA in the Data Center – It really matters!

Understanding the difference between kVA and kW is critical to proper operation of a data center. The equation is simple – kW equals kVA multiplied by the power factor, but the implications can be a little more complex.

Power factor is the ratio of real (working) power to apparent power. In layman's terms, a purely resistive load burns all of the power delivered to it on the spot, whereas a reactive load does not burn all of the power and "bounces" some back into the system. A lightbulb is an example of a resistive load, while an electric motor draws reactive power. Most electric systems and utility-grade equipment are designed around a power factor of 80%. Most data centers, on the other hand, present an almost completely resistive load and have power factors approaching unity (100%).

Since the data center power factor is high and, as such, kW is very close to kVA, some people discount the difference and use the two interchangeably. This is very dangerous and can lead to the creation of “phantom” capacity.

The fundamental limitation crops up in how much capacity critical equipment such as generators and UPS systems can deliver. Most of these systems are rated for a power factor of 80%, which means they can only produce kW equivalent to 80% of their kVA rating.

What does this mean? Let’s walk through a sample calculation. A data center is filled with 120 V circuits on which customers are allowed to draw up to 16 Amps, which is equivalent to 1.92 kVA. If we assume this equipment has a power factor of 95%, that is equal to 1.82 kW. On the other side, a 600 kVA UPS with a power factor of 0.8 produces 480 kW. So the capacity of the UPS is 480/1.82 = 263 circuits. If we had done “simple” division on the kVA side, we would have had a capacity of 600/1.92 = 312 circuits. So we actually have 49 fewer circuits than the “simple” math would have indicated, hence the reason that it is critical to convert all loads and capacities to kW!
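The arithmetic above can be laid out step by step. This is just the post's own example in code form, with the load power factor of 95% kept as the stated assumption:

```python
# The worked example: convert everything to kW before dividing,
# otherwise "phantom" circuit capacity appears.

circuit_kva = 120 * 16 / 1000       # 120 V at 16 A = 1.92 kVA per circuit
load_pf = 0.95                      # assumed power factor of the IT load
circuit_kw = circuit_kva * load_pf  # ~1.82 kW per circuit

ups_kva = 600
ups_pf = 0.8                        # UPS rated at 80% power factor
ups_kw = ups_kva * ups_pf           # 480 kW deliverable

kw_circuits = int(ups_kw / circuit_kw)       # 480 / 1.82 -> 263 circuits
naive_circuits = int(ups_kva / circuit_kva)  # 600 / 1.92 -> 312 circuits

print(f"kW-based capacity:  {kw_circuits} circuits")
print(f"naive kVA capacity: {naive_circuits} circuits")
print(f"phantom circuits:   {naive_circuits - kw_circuits}")
```

Dividing on the kVA side overstates capacity by 49 circuits, which is exactly the "phantom" capacity the post warns against.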

Recently, manufacturers have begun to address this by designing systems with higher power factor ratings. A few companies are also marketing retrofit kits for existing UPS systems to increase the power factor rating.

Check back in a few weeks for the next installment – “Implications of Data Center Power Factor for the Utility Grid”.