Saturday, November 24, 2012

What's After Cloud?


As an advisor to some of the world's largest companies, it's my job to keep up with advances in technology. I'm paid to answer questions like, "What's after cloud?" I've thought a lot about this question and I've formed my answer: "More cloud." I believe that many new innovations will be packaged as 'cloud' and that the combined ecosystem of innovation will outweigh other, non-server-side contenders.

Clouds promote automation, computing efficiency and higher service levels. Public clouds add the outsourcing model, while private clouds leverage existing infrastructure. Despite the value clouds offer, the investments made in cloud computing by both vendors and buyers remain small relative to the size of the opportunity. I believe that the next several decades will be dominated by a single computing paradigm: cloud.

From Structured Programming to Cloud Elasticity
The magic of cloud is a service's ability to provision additional computing capacity on demand without the user being aware of it. Cloud offerings are divided into sub-systems that each perform a specific function and can be called over a network via a well-defined interface. For the uninitiated, we call this a service-oriented architecture (SOA). Cloud offers a variety of services, such as compute-as-a-service and database-as-a-service. The service-oriented approach allows an implementer to swap out the internals of a service without impacting its users, a concept borrowed from prior art (structured programming, OOD, CBD, etc.). While SOA extends these prior paradigms to embrace distributed computing, cloud extends SOA to address quality attributes, or non-functional concerns, such as scalability and availability. A cloud service responds to requests from many users, and because each request varies in complexity, the amount of computational power needed to satisfy demand varies over time.
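
To make the idea concrete, here is a minimal sketch of the SOA/elasticity pairing. All class and method names are hypothetical (not any real provider's API): callers depend only on the interface, so the provider can swap internals or grow capacity without the callers noticing.

```python
# Minimal sketch: callers see only the interface, so the provider can swap
# internals or add capacity behind it. All names are hypothetical.
from abc import ABC, abstractmethod


class ComputeService(ABC):
    """The well-defined interface callers depend on."""

    @abstractmethod
    def run(self, job: str) -> str:
        ...


class SingleNodeBackend(ComputeService):
    def run(self, job: str) -> str:
        return f"ran {job!r} on one node"


class ElasticBackend(ComputeService):
    """Provisions more workers as demand grows; callers never notice."""

    def __init__(self, workers: int = 2):
        self.workers = workers
        self.pending = 0

    def run(self, job: str) -> str:
        self.pending += 1
        if self.pending > self.workers * 10:   # crude scaling rule
            self.workers *= 2                  # provision more capacity
        return f"ran {job!r} across {self.workers} workers"


def client(service: ComputeService) -> None:
    # The client code is identical whichever backend is plugged in.
    print(service.run("nightly-report"))


client(SingleNodeBackend())
client(ElasticBackend())
```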

Encapsulated Innovation
The as-a-service model encapsulates (or hides) new innovations behind the service interface. For example, when solid-state drives began delivering fast I/O at competitive prices, cloud storage services began using them under the covers. When new patterns and algorithms are invented, we see them turned into as-a-service offerings (a small sketch of this encapsulation follows the list):
  • MapReduce becomes the AWS Elastic MapReduce service
  • Dynamo and eventual consistency become Amazon DynamoDB and MongoDB-as-a-service offerings
  • Dremel becomes Google BigQuery
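
As a hedged illustration of the encapsulation itself (the classes below are made up for the example), a storage service can adopt a faster medium behind its interface without any change to the code that calls it:

```python
# Illustrative only: a storage "service" whose internal medium can change
# (spinning disk today, SSD tomorrow) without altering the caller-facing interface.
class DiskMedium:
    def read(self, key: str) -> str:
        return f"{key} (from spinning disk, ~10 ms seek)"


class SSDMedium:
    def read(self, key: str) -> str:
        return f"{key} (from SSD, ~0.1 ms access)"


class StorageService:
    def __init__(self, medium):
        self._medium = medium        # hidden behind the service boundary

    def get(self, key: str) -> str:  # the stable, public interface
        return self._medium.read(key)


store = StorageService(DiskMedium())
print(store.get("invoice-42"))

# The provider upgrades the internals; client code is untouched.
store = StorageService(SSDMedium())
print(store.get("invoice-42"))
```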


Significant innovations will continue to unfold, but the vehicle for delivering them will be as-a-service (SOA) with elastic infrastructure (cloud). Said another way, cloud will be awarded the credit for innovation because it is the delivery vehicle. This might seem like a misplaced assignment of credit, but in many cases the cloud model may be the only practical means of delivering highly complex, infrastructure-intensive solutions. For example, setting up a large Hadoop farm is impractical for many users, but using one that is already in place (e.g., AWS EMR) brings the innovation to the masses. In this sense, the cloud isn't the innovation, but it is the agent that makes the innovation viable.

Metcalfe’s Law
A cloud is a collection of nodes that interact across multiple layers (security, recovery, etc.). As the collection of nodes grows, so does the value of the cloud. If this sounds familiar, it's rooted in network theory (Metcalfe's Law, Reed's Law, etc.). To liberally paraphrase, these laws state that the value of a network increases as more nodes, users and content are added to it. I'd argue that the same model holds true for cloud: as a cloud grows (machines, users, as-a-service offerings), its value grows super-linearly. Any solution that accumulates value in a non-linear fashion becomes very difficult to replace. The traditional killers of a network value proposition are a new innovation that displaces the original, or the network getting dirty (too costly, too complicated, etc.). In theory, SOA and the cloud delivery model exhibit inherent properties that counter both concerns.
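
For a rough feel of why that value accumulates non-linearly, here is a small sketch comparing a linear value model with Metcalfe's (value growing with the number of possible pairwise connections) and Reed's (value growing with the number of possible sub-groups). The absolute numbers are arbitrary; the shape of the growth is the point.

```python
# Rough comparison of network-value models as a cloud adds nodes/users.
def linear(n):    # value proportional to nodes (e.g., broadcast)
    return n

def metcalfe(n):  # value proportional to possible pairwise connections
    return n * (n - 1) // 2

def reed(n):      # value proportional to possible sub-groups of 2 or more
    return 2 ** n - n - 1

for n in (2, 4, 8, 16, 32):
    print(f"n={n:>2}  linear={linear(n):>3}  metcalfe={metcalfe(n):>4}  reed={reed(n):>12}")
```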

Incremental Funding
A significant attribute of cloud is that it grows 'horizontally': a cloud operator can add another server or storage system incrementally. Unlike a mainframe, a cloud can be grown with small, inexpensive units, and this characteristic encourages long-term growth. Anyone who has had to fight for I.T. budget will recognize the importance of being able to leverage agile funding models. It's more than a nicety; it's a Darwinian survival method during depressed times. Cloud, like a cockroach, will be able to survive the harshest of environments.
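
A toy comparison of the two funding patterns, using invented round-number prices purely to show the shape of the spend:

```python
# Invented numbers: incremental ("horizontal") spend vs. one large upfront purchase.
commodity_server = 3_000          # add one small unit whenever demand justifies it
big_iron_upfront = 500_000        # one large purchase made before demand exists

demand_growth_per_quarter = 4     # servers' worth of new demand each quarter
spend = 0
for quarter in range(1, 9):
    spend += demand_growth_per_quarter * commodity_server
    print(f"Q{quarter}: incremental spend to date = ${spend:,} "
          f"(vs. ${big_iron_upfront:,} committed on day one)")
```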

Data Gravity (Before and After)
Dave McCrory suggested the concept of Data Gravity: “Data Gravity is a theory around which data has mass. As data (mass) accumulates, it begins to have gravity. This Data Gravity pulls services and applications closer to the data. This attraction (gravitational force) is caused by the need for services and applications to have higher bandwidth and/or lower latency access to the data.” McCrory's concept suggests an initial barrier to cloud adoption (moving data to the cloud), but it also suggests that once the data has been moved, more data will accumulate, making it increasingly difficult to move off of the cloud. This model jibes with the modern engineering belief that it's better to move the application logic to the data rather than the reverse. As clouds accumulate data, Data Gravity suggests that even more data (and logic) will accumulate.
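
To make the analogy in the quote literal (this is only the gravitational metaphor rendered as a toy calculation, not McCrory's published model), treat accumulated data as "mass" and latency as "distance":

```python
# Toy rendering of the gravity analogy: NOT McCrory's actual formula, just the
# metaphor made literal to show why the pull grows as data accumulates nearby.
def pull(data_mass_gb: float, app_mass_gb: float, latency_ms: float) -> float:
    return (data_mass_gb * app_mass_gb) / (latency_ms ** 2)

# The same application feels a far stronger pull toward the larger, closer dataset.
print(pull(data_mass_gb=100,     app_mass_gb=1, latency_ms=50))   # small, remote dataset
print(pull(data_mass_gb=100_000, app_mass_gb=1, latency_ms=2))    # large, co-located dataset
```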

The Centralization-Decentralization Debate
One of my first managers told me that I.T. goes through cycles of centralization and decentralization. At the time he mentioned it, we were moving from mainframes to client/server. He noted that when control moves too far away from the people doing the work, there is a natural reaction to take power back from the central authority in order to regain enough control to solve one's own problems. Cloud attempts to balance this tension. The cloud is usually considered a centralized model because of the homogeneous nature of its data centers, servers and so on, yet the self-service aspect of cloud pushes power back to the end user. Cloud is designed to be the happy medium between centralized and decentralized; only time will tell if it strikes that balance.

In summary, I believe that multiple large innovations are coming, but many, if not most, will be buried behind an as-a-service interface and we'll call them cloud. When I watch TV, I'm rarely aware of the innovations in the cameras, editing machines, satellites or other key elements of the ecosystem. From my perspective, TV just keeps getting better (it's magic). The cloud encapsulates innovation in a similar manner. In some ways it is unfortunate that new innovations will be buried by the delivery model, but fundamentally it's this very abstraction that will ensure cloud's survival and growth.

Monday, November 19, 2012

Amazon’s Cloud: Five Layers of Competition

Most people would agree: Amazon Web Services is crushing their competition. Their innovation is leading edge, their rate of introducing new products is furious and their pricing is bargain-basement low.

This is a tough combination to beat! How do they do it?

The Power of Permutations
Amazon's offering takes a layered approach. New solutions are introduced at any of the Five Layers and are then combined with the other layers. By creating solutions with interchangeable parts, they've harnessed the power of permutations via configurable systems.
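
A back-of-the-envelope sketch of that combinatorial effect. The number of options per layer below is invented; the point is that the count of distinct configurations is the product of the choices.

```python
# Back-of-the-envelope: if each layer offers independent choices, the number of
# distinct configurations is the product of the options. Counts are invented.
layer_options = {
    "platform": 10,           # e.g., relational engines, DynamoDB, EMR, ...
    "data center": 8,         # regions
    "virtualized infra": 15,  # instance/storage types
    "cross-cutting": 4,       # console, CloudWatch, CloudFormation, IAM
    "economics": 3,           # on-demand, reserved, spot
}

configurations = 1
for options in layer_options.values():
    configurations *= options
print(f"{configurations:,} possible offering configurations")  # 14,400 with these guesses
```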

Platform
Take an example starting with a new platform. Let's imagine that Amazon were to offer a new data analytics service. They'd likely consider the offering from two angles: 1) How do we support current analytics platforms (legacy)? and 2) How do we reinvent the platform to take advantage of scale-out, commodity architectures? Amazon typically releases a new platform in one form that supports current customer needs (e.g., support for MySQL, Oracle, etc.) and then rolls out a second, proprietary form (e.g., SimpleDB, DynamoDB) that is arguably a better fit for a cloud-based architecture.

Data Center
When Amazon releases a new offering, they rarely release it to all of their data centers at the same time. We'd expect them to deliver it first in their largest center, the AWS East Region, across multiple availability zones. After some stabilization period, the offering would likely be rolled out to all US regions, or even globally. Later, it would be added to restricted centers like GovCloud. Amazon is careful to release a new offering in a limited geography for evaluation purposes; over time, the service is expanded geographically.

Virtualized Infrastructure
The new service would likely use hardware and storage devices best suited for the job (large memory, high CPU, fast network). It's common to see Amazon introduce new compute configurations driven by the needs of their platform offerings. Over time, the offerings are extended to use additional support services, such as ways to back up the data or patch the service. Naturally, we'd expect that as even newer infrastructure offerings become available, we'd be able to insert them into our platform configuration.

Cross-Cutting Services
For every service introduced, there are a number of cross-cutting services that intersect all of the offerings. Amazon's first priority is usually to update the management console, which enables convenient administration of the new service. Later, we'd expect the service to be added to their monitoring system (CloudWatch) and their orchestration service (CloudFormation), and to be securable via their permissions system (IAM). These three cross-cutting services are key enablers of the automation story that Amazon offers.
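
A sketch of why these hooks matter, using hypothetical names rather than the actual AWS APIs: every new service is registered with the same monitoring, orchestration and permission layers, so the automation story comes along "for free."

```python
# Hypothetical sketch, not the AWS APIs: each new service registers with the same
# cross-cutting layers, so console, monitoring, orchestration and permissions all
# work for it from day one.
class CrossCuttingLayers:
    def __init__(self):
        self.console, self.monitoring, self.orchestration, self.permissions = [], [], [], []

    def onboard(self, service_name: str) -> None:
        self.console.append(f"{service_name} admin pages")
        self.monitoring.append(f"{service_name} metrics")        # a CloudWatch-like hook
        self.orchestration.append(f"{service_name} templates")   # a CloudFormation-like hook
        self.permissions.append(f"{service_name} policies")      # an IAM-like hook


layers = CrossCuttingLayers()
for svc in ("NewAnalyticsService", "NewQueueService"):
    layers.onboard(svc)
print(layers.monitoring)
```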

Economics
Perhaps the only thing Amazon enjoys more than creating new cloud services is finding interesting ways to price them. For any new offering, we would expect Amazon to have multiple ways to price it. If it were a legacy platform, we'd expect to be billed by the size of the machines, the number of hours they ran, and the disk and network they used. If it were a next-generation platform, we'd expect to be billed on some new concept, perhaps the number of rows analyzed or the number of rows returned by a query. Either way, we'd expect the price of the offering to come down over time thanks to Amazon's economies of scale and efficiency.
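
A toy contrast of the two billing styles described above; all rates and volumes are invented for illustration.

```python
# Invented numbers, purely to contrast machine-hour billing with usage-based billing.
def legacy_bill(instance_hours, hourly_rate=0.48, gb_stored=500, gb_rate=0.10):
    return instance_hours * hourly_rate + gb_stored * gb_rate

def next_gen_bill(rows_analyzed, price_per_million_rows=0.35):
    return (rows_analyzed / 1_000_000) * price_per_million_rows

print(f"Legacy month:   ${legacy_bill(instance_hours=720):,.2f}")
print(f"Next-gen month: ${next_gen_bill(rows_analyzed=2_500_000_000):,.2f}")
```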

The Amazon advantage isn't about any one service or offering; it's a combinatorial solution. They have found a formula for decoupling their offering in a way that enables rapid new product introduction and, perhaps more importantly, the ability to upgrade their offerings in a predictable and leveraged manner over time. Their ability to combine two or more products to create a new offering gives them 'economies of scope', a fundamental enabler of product diversification that leads to a lower average cost per unit across the portfolio. Amazon's ability to independently control the Five Layers has given them a repeatable formula for success. The next time you read about Amazon introducing XYZ platform, in the East Region, using Clustered Compute boxes, hooked into CloudWatch, CloudFormation and IAM, with Reserved Instance and Spot Instance pricing, just remember: it's no accident. Service providers who aren't able to pivot at the Five Layers may find themselves obsolete.