Friday, April 24, 2015

What is the Equinix Cloud Exchange?

Anyone following Equinix has heard a lot about Cloud Exchange over the last few quarters.  Cloud Exchange seems to be the magic potion that finally helped Equinix jumpstart growth in sales to enterprise customers, but what is it, and is it really an important and durable product?

Cloud Exchange is based on a large, high-capacity Ethernet switching fabric in select Equinix data centers.  It is separate and distinct from the Equinix Internet Exchange (peering) product.  It appears that part of the impetus for Cloud Exchange came from an investment Equinix made in Ethernet equipment to support a VPN product several years ago, and Equinix states that "Equinix Cloud Exchange leverages the Equinix Ethernet Exchange Platform."

Equinix offers Cloud Exchange as an all-optical product with available port sizes of 1 Gbps multi-mode, 1 Gbps single-mode, and 10 Gbps single-mode.  Unlike the Internet Exchange product, Cloud Exchange allows for the creation of virtual circuits to a given provider at speeds of 200 Mbps, 500 Mbps, or 1 Gbps.  All of these services are provided under the Equinix Ethernet Exchange SLA of 99.999% uptime.
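
To make the port and circuit options above concrete, here is a minimal sketch in Python that models them as a simple data structure and checks whether a set of requested virtual circuits fits on a chosen port.  The data model, the helper function, and the assumption that circuits may not oversubscribe a port are mine for illustration; the post does not describe how Equinix actually provisions these circuits.

```python
# A minimal sketch, assuming the port sizes and virtual-circuit speeds listed
# above.  The data model is invented for illustration, as is the assumption
# that virtual circuits may not oversubscribe their port.

PORT_CAPACITY_MBPS = {
    "1G multi-mode": 1_000,
    "1G single-mode": 1_000,
    "10G single-mode": 10_000,
}
VC_SPEEDS_MBPS = (200, 500, 1_000)  # per-provider virtual circuit options

def fits_on_port(port: str, requested_vcs: list) -> bool:
    """Check that every requested virtual circuit uses an offered speed and
    that the circuits together fit within the chosen port's capacity."""
    if any(vc not in VC_SPEEDS_MBPS for vc in requested_vcs):
        return False
    return sum(requested_vcs) <= PORT_CAPACITY_MBPS[port]

# e.g., two 500 Mbps circuits to different cloud providers on one 1G port:
print(fits_on_port("1G single-mode", [500, 500]))    # True
print(fits_on_port("1G single-mode", [500, 1_000]))  # False - exceeds the port
```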

Now that we have a perspective on what the Cloud Exchange product is, we can examine who the users are, how they are using it, and how important it is long-term.

The enterprise use cases seem to be driven by one or more of the following three concepts:

  1. Existing WAN (typically MPLS) is available at Equinix - These are customers that currently have some or all of their data center footprint in an Equinix facility and, more significantly, have established that facility as a node on their Wide Area Network (WAN).  Since WAN performance is established and understood at the location, it makes a logical place to connect to and integrate external cloud resources.
  2. Installing Private (or Hybrid) cloud within Equinix - If an enterprise intends to closely integrate their own systems with a cloud provider as may be done in a hybrid cloud installation, locating that equipment in an Equinix facility with Cloud Exchange is going to be an effective and cost efficient way to provide a reliable, low latency connection to the public cloud provider(s).
  3. Implementing a Cloud Data Replication strategy - In response to the challenges of ownership, security, and back-up of data, many enterprise customers are installing their own infrastructure to maintain a record copy of the data from their cloud providers.  Sometimes this is set up as continuous replication, while other times it is done at regular intervals.  Locating the equipment to do this in a high quality data center with direct cloud access increases the integrity of the implementation.
All of these use cases have combined to help boost Equinix's enterprise business, which matters because enterprise customers tend to be fairly high margin and are less focused on squeezing out cost than many service providers have become.

Direct connectivity is an alternative to Cloud Exchange that some customers, particularly larger ones, utilize.  As it sounds, this is a dedicated connection to a cloud provider access point, either through a cross connect within a data center or through a network connection between two separate facilities.  Direct connectivity lacks the "try before you buy" flexibility and the ease of switching providers that Cloud Exchange offers.

In general, all of these products, as well as others not yet in the market, will grow as cloud adoption moves forward.  In particular, I would expect one or more network providers to establish a cloud exchange within their networks with some of the same characteristics and similar Service Level Agreements.  This would allow customers to be data center independent and give them even more flexibility.

Thursday, April 4, 2013

Data Center Trends over the Last Two Years

As I move firmly back into the Data Center Industry, I have been visiting a lot of facilities, reviewing new technologies, and focusing on opportunities to change the way we do things and reduce cost.

In particular, the following trends have become more pronounced and clear over the last two years:

  • The number of sophisticated data center customers is steadily increasing.  Data on space, power, and network cost is more readily available in the market, and the competitive marketplace is stronger than ever.  By and large this has not driven down cost across the board, but it has for the sophisticated clients, and it has slowed price increases for others.  The magnitude of this can be stunning.  I have seen reductions as high as 30%+ on very sizable contracts.
  • It is all about cost per kW and per Mbps.  The only way to truly compare offers is to pull out the costs for these two elements: the kW cost rolls up space, cabinet, cooling, and all aspects of power (utility, back-up, UPS, etc.), while the Mbps cost rolls up cross connect, circuit, peering, usage, etc. (a simple normalization sketch follows this list).  These numbers vary widely when customers have not been properly educated and focused on them.  I routinely see kW costs as high as $600 per month and as low as $100 per month, and Mbps costs as high as $300 and as low as <$1.
  • Private cloud is real and it is devouring traditional federated infrastructure.  Very little bare-metal server or storage capacity is deployed today.  Almost everything goes in under virtualization, and much of it is dedicated to a single enterprise.
  • Energy savings are real, and complex.  There are a number of themes here: delivering power more efficiently, reducing idle time, raising data center temperature, and employing new cooling technologies.  Given that the power bill on an "old school" data center in New York can be $2 million per megawatt annually, and these technologies can save more than half that cost, this moves the budget needle and is good for the world.  (In a future blog I will delve into how these technologies can substantially increase capacity and extend the life of data centers that are currently tapped out.)
  • Data center reliability is overrated.  As more applications move to the cloud, the ability to deliver them is at least as dependent on the network as it is on the data center.  Increasingly, concepts like High Availability, Fault Tolerance, Continuous Availability, and related methodologies depend on data, compute, and network resources distributed across multiple locations.  The delivery systems can survive the loss of one or more locations, so the importance of reliability at any single location is reduced.  This has breathed new life into a plethora of Tier 2 data centers with N+1 or even N back-up.  Many service providers are pushing data center developers to offer this as a new product to further cut their cost structure.
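
As a concrete illustration of the cost normalization in the second bullet, here is a minimal sketch.  Only the rolled-up cost categories come from the bullet; the contract figures below are invented for the example.

```python
# A minimal sketch of the cost-per-kW / cost-per-Mbps normalization described
# above.  Only the rolled-up cost categories come from the post; the contract
# figures are invented for illustration.

def cost_per_kw_month(space, cabinets, cooling, utility, backup_ups, kw):
    """Roll space, cabinet, cooling, and all power costs into one monthly $/kW."""
    return (space + cabinets + cooling + utility + backup_ups) / kw

def cost_per_mbps_month(cross_connects, circuits, peering, usage, mbps):
    """Roll cross connect, circuit, peering, and usage costs into one monthly $/Mbps."""
    return (cross_connects + circuits + peering + usage) / mbps

# Hypothetical 100 kW, 2,000 Mbps contract:
kw_rate = cost_per_kw_month(8_000, 3_000, 6_000, 9_000, 4_000, kw=100)
mbps_rate = cost_per_mbps_month(1_500, 4_000, 500, 2_000, mbps=2_000)
print(f"${kw_rate:,.0f} per kW per month")      # $300 - within the range cited above
print(f"${mbps_rate:,.2f} per Mbps per month")  # $4.00 - within the range cited above
```
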
Over the next days, weeks, and months I will dig deeper into these and other concepts shaping the future of the Data Center Industry.

@datacenterguru is back to blogging

@datacenterguru took a little hiatus from this blog, and to some extent the data center world, to build the Transformation practice for Telwares.  Two years later, that process is complete, Telwares has been sold, and @datacenterguru is more focused on Data Centers than ever.  It is time to rock the boat, innovate, join an interesting and innovative company, or start one myself.  So with that said, let's get to it.

Saturday, April 23, 2011

Iceberg of De-materialization, Horizon of Commoditization, and Theories of Cloud Universe

Earlier this month, I had the pleasure of attending an MIT Enterprise Forum Fireside Chat with Bill Joy of Kleiner Perkins (Jason Pontin was the moderator). Among many other things, Joy spoke of a concept that he calls the "Iceberg of De-materialization." He defined this as changes in technology that undermine entire products and industries. He cited the recent example of Cisco's decision to shelve the Flip camera and postulated that the video capabilities of iPads, iPhones, and similar devices are eliminating the need for a stand-alone camera.

This reminded me of a term that I have used at various points over the last decade - the "Horizon of Commoditization." It is perhaps a slightly different twist on the concept as it applies to providing services.

My Horizon of Commoditization assertion is that there is a moving line between what buyers spend time thinking about and what they buy as a commodity, like milk or eggs. Such commodity purchases are often decided on cost alone, with relatively little weight given to differentiating features.

In the data center business, the march to cloud services is very much like this. People want to buy compute or storage without knowing the details of how it is built and operated. This approach has a number of implications:
  • public cloud is dominated by companies with tremendous economies of scale
  • decisions are made on price alone with much less attention to "details" like quality of service
  • customers also focus little thought on where the computing and storage actually happen and the quality of network access to those points
Some people believe that this could ultimately lead to the dismantling of some aspects of the hardware and data center industries. This is, however, counterbalanced by the fact that the cloud still requires equipment, that equipment must be housed somewhere, and data and applications continue to grow.

The concepts of De-materialization and Commoditization have an even darker side. Does the person who uses their iPhone rather than a traditional camera produce a better quality photo or video? Probably not. Does the person who buys cloud without understanding its design, performance, and reliability aspects serve their company well? Probably not. There are still basic concepts that sit "behind the horizon" that need to be considered. As consumers, we need to resist the temptation of these screens of opacity and make certain that what we are buying meets (all of) our needs.

So does this create a business opportunity for the companies providing these products and services? It has certainly worked for Apple on the product side, but how do we unlock the same treasure for a services business?


Wednesday, November 3, 2010

Latency and Data Center Siting

Today, there is much emphasis on the ability to locate cloud services in mammoth data centers ideally sited to take advantage of low-cost power and climatically or environmentally favorable cooling methodologies.

So, why don't we move everything to such facilities? There are a number of reasons:
  • Companies whose systems require care and feeding by staff in a specific geographic location.
  • Privacy laws that require the data be kept within certain boundaries.
  • Disaster recovery and accessibility concerns.
  • Physical latency, which is the time it takes to communicate with the facility.
Today, I am going to address the issue of latency, which at its core is limited by physical laws of nature.

Latency is the time it takes data to travel across a network, generally measured in milliseconds (ms), each of which is 1/1000 of a second. It is generally expressed as "round trip latency", the time it takes information to travel from point A to point B and back to point A. This convention reflects the traditional tools (e.g., "ping") used to measure latency and also serves to simulate the effective time to make and confirm a transaction.

Round trip latency between New York and London is about 100 ms, and New York to Los Angeles is about 70 ms.

This delay is important in that it impacts the performance and business aspects of many transactions. Program trading operations, for example, want to minimize latency as much as possible and, as such, want to locate within 20 miles of the "trading" center with latency below 2 ms. Many other applications perform best with latencies of less than 20 ms, which implies a distance of less than roughly 700 miles.
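
The distance figures above follow from how fast a signal can physically travel. Here is a minimal sketch, assuming light moves through optical fiber at roughly 200 km per millisecond (about two-thirds of its speed in a vacuum) and that real fiber routes run about 1.5 times the straight-line distance; both assumptions are mine, not from the post, and equipment delay is ignored.

```python
# A minimal sketch of the distance-to-latency relationship discussed above.
# Assumptions (mine, not from the post): ~200 km per ms in fiber, and fiber
# routes ~1.5x the straight-line distance.  Equipment and queuing delay are
# ignored, so these numbers are floors, not predictions.

SPEED_IN_FIBER_KM_PER_MS = 200.0
ROUTE_FACTOR = 1.5
MILES_TO_KM = 1.609

def min_round_trip_ms(distance_miles: float) -> float:
    """Approximate lower bound on round-trip latency for a site at this distance."""
    one_way_km = distance_miles * MILES_TO_KM * ROUTE_FACTOR
    return 2 * one_way_km / SPEED_IN_FIBER_KM_PER_MS

print(f"{min_round_trip_ms(20):.2f} ms round trip at 20 miles")    # well under 2 ms
print(f"{min_round_trip_ms(700):.1f} ms round trip at 700 miles")  # approaches 20 ms once equipment delay is added
```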

The latency tolerance of applications is increasing, and many have become virtually latency insensitive. It is this class of applications that can most easily be moved to a data center optimally located for power and cooling.

The more sophisticated IT departments now segregate their applications by latency, as well as by other issues. This allows them to group what must be done at local versus remote data centers, and may include several gradations along the way - such as one data center within 20 miles, one within 250 miles, one within 500 miles, and so on.
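
A hypothetical illustration of that kind of latency-based segregation is sketched below; the application names, tiers, and thresholds are all invented for the example.

```python
# A hypothetical illustration of segregating applications by latency tolerance,
# along the lines of the gradations above.  Names, tiers, and thresholds are
# invented for the example.

LATENCY_TIERS = [                       # (description, max round-trip ms)
    ("local (within ~20 miles)", 2),
    ("regional (within ~250 miles)", 8),
    ("remote (within ~700 miles)", 20),
    ("any optimally sited location", float("inf")),
]

applications = {
    "equity order routing": 1,     # required round-trip latency in ms
    "virtual desktops": 15,
    "overnight batch analytics": 500,
}

for app, required_ms in applications.items():
    tier = next(name for name, limit in LATENCY_TIERS if required_ms <= limit)
    print(f"{app}: {tier}")
```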

Friday, October 22, 2010

Does the Equinix "miss" Signal the Beginning of the End?

On October 5, Equinix announced that both their third quarter and full year revenues for 2010 would be 1 to 2 percent lower than previously forecast, although their EBITDA would be higher than previously forecast. The stated reasons include (i) "underestimated churn assumptions in Equinix's forecast models in North America", (ii) "greater than expected discounting to secure longer term contract renewals," and (iii) "lower than expected revenues attributable to the Switch and Data business acquired in April 2010."

So, what does this mean for the industry?

At first cut, it appears to be primarily the result of shortcomings in Equinix's integration of Switch and Data, but I do think that this highlights some core trends that the industry needs to be aware of:
  1. Customers today consider moving a viable option - the move to virtualized environments and converged architectures makes it easier for data center users to migrate to a new facility. This is amplified by the fact that many customers are currently implementing these solutions and require some type of migration - either in place or to a new facility.
  2. There is an efficient marketplace - the emergence of national/international databases, reporting, and competitive placement has made it very easy for customers to understand what market rates are and to obtain competing proposals.
  3. Capacity is not constrained in most markets - the facility expansions of recent years combined with improvements to many facilities mean that there is available capacity in most markets.
My takeaway is that it is finally time for the pure infrastructure companies to start climbing the services stack. This will be the only way to maintain revenue and profit growth in the currently developed markets of the US and Western Europe.