CloudWorkspace

How Big Is A Piece Of Cloud?

Peter Judge

We need better ways of understanding what cloud providers actually offer, argues Peter Judge

When people grow up, they generally become more staid. Technologies do the same.

Cloud computing may never have been rock’n’roll, but in its early days it was scary and transgressive. This is no longer the case. And the strongest signs of the change could be summed up in two words. Words which only get used about established, sensible technologies. Those words are ‘measurement’ and ‘benchmarks’.

It’s time for cloud providers to start answering questions about how to measure the chunks of service they provide. And how to compare the performance those chunks deliver.

Comparing performance

In a mature cloud market, users will choose to buy services from a number of providers. At the moment, there are usually only a small number of providers for any given service, and the choice rests on perceived reliability, security, and whether you expect the supplier to still be around in a few years' time.

So performance may not get a look in. After all, cloud provider A can sell you as much performance as cloud provider B just by adding some more resources and upping the price. That’s kind of the point of cloud.

At this stage, price won’t be the prime consideration, but as the market matures, the choice will increasingly come down to price and performance. And to make that choice, users will need a sure way to compare the performance of different clouds, and to compare the size of the processing unit they get for a given price.

Movement on the first issue has begun, with SPEC (the Standard Performance Evaluation Corporation) announcing the launch of a group, OSGCloud, to define ways to measure cloud performance. SPEC deals with benchmarks – standardised loads, which can be run on all systems, to compare their performance. The cloud group will extend this to shared virtual services.

“Cloud computing is on the rise and represents a major shift in how servers are used and how their performance is measured,” said Rema Hariharan, chair of OSGCloud. “We want to assemble the best minds to define this space, create workloads, augment existing SPEC benchmarks, and develop new cloud-based benchmarks.”

This effort won’t be done overnight, but it is definitely off the starting blocks. “The OSGCloud group is well beyond the theoretical stage and actively working on the benchmark,” group member Bob Cramblitt, of Cramblitt and Company, communications manager for SPEC, told TechWeekEurope. “They anticipate having something ready in about one year – getting the right datasets, establishing the testing parameters, ensuring a level playing field, and creating the metrics and reporting formats takes time and this is a voluntary group, all of whom have day jobs.”

The group is going to start at the IaaS (Infrastructure-as-a-Service) level, where Amazon and Microsoft operate, but may move on to PaaS (Platform-as-a-Service) and SaaS (Software-as-a-Service).

But how big is a piece of cloud?

That will eventually answer the question of measuring the performance of a piece of cloud that you buy from a cloud provider. But it doesn’t answer the immediate problem of how you compare the size of the piece of cloud you have bought.

We’ve now reached a stage where users can buy cloud services from different providers, and need a clear way to compare the size of the piece of cloud they get from each one.

Comparing RAM and storage is fairly easy (though a thorough buyer will want to know access times and reliability). But how about comparing CPU?

Amazon Web Services (AWS) uses its own proprietary measurement – the Elastic Compute Unit (ECU), which has an equivalent CPU capacity of a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor.

Rackspace uses a different non-standard unit, selling processing power in “compute cycles”, which are “roughly equivalent to running a server with a 2.8 GHz modern processor for the same period of time”.

Other cloud providers use their own measurements. For instance, Lunacloud uses the “vCPU”, which is equivalent to a 1.5 GHz 2010 Xeon processor.
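To see how awkward these comparisons get, here is a minimal sketch that converts each provider’s advertised unit into a naive GHz-equivalent figure. The GHz numbers come from the providers’ own descriptions quoted above; the example instance sizes are made up for illustration. As the next paragraph argues, this kind of clock-speed arithmetic is exactly what objective benchmarks would replace, since a 2007 Opteron GHz and a 2010 Xeon GHz do very different amounts of work.

```python
# Naive GHz-equivalent comparison of cloud compute units.
# The ghz_equiv values are the providers' own stated equivalences;
# the instance sizes below are hypothetical examples.

UNITS = {
    "AWS ECU": 1.1,          # midpoint of the stated 1.0-1.2 GHz 2007 Opteron/Xeon
    "Rackspace cycle": 2.8,  # "2.8 GHz modern processor"
    "Lunacloud vCPU": 1.5,   # 1.5 GHz 2010 Xeon
}

def ghz_equivalent(unit_name, quantity):
    """Multiply a provider's stated per-unit GHz figure by the quantity bought."""
    return UNITS[unit_name] * quantity

# Hypothetical instances: 4 ECUs vs 2 vCPUs.
aws = ghz_equivalent("AWS ECU", 4)        # 4.4 "GHz-equivalent"
luna = ghz_equivalent("Lunacloud vCPU", 2)  # 3.0 "GHz-equivalent"
print(aws, luna)
```

The flaw, of course, is that these GHz figures describe different processor generations, so the numbers aren’t really commensurable – which is the argument for a benchmark-based unit.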

Amazon and Lunacloud are being quite helpful in naming the type of processor their units are meant to be a virtual equivalent of, but their units still won’t be objective. One vendor’s machine using the same Xeon processor may perform differently from another’s – that’s the reason we have SPEC benchmarks for real physical servers in the first place.

“Developing common standards in data security, sovereignty and privacy is rightly occupying the focus of many in the cloud industry,” says Antonio Miguel Ferreira, CEO of Lunacloud. “However, we need complete transparency so that end-users can easily compare every part of a cloud provider’s market offer with its competitors’.”

I think we need a two-fold strategy here. First, let’s get cloud providers to be specific about what level of CPU their cloud service approximates to, so we can get a rough idea of how much processor we get for our money.

Second, let’s get the SPEC benchmarks together so we can check whether those promises are true – and have a Cloud-SPECmark unit to objectively compare the performance of cloud services.

Till the benchmarks are ready, you can measure your own cloud performance – with our quiz!

  1. Very good summary on the need for the industry to agree on benchmarks, to make comparisons easier for customers.

    The cloud industry is getting more mature and going through the same process hardware vendors went through maybe 20 years ago, when SPEC benchmarks became more commonly used.

    The higher you go on the cloud stack (IaaS, PaaS, SaaS) the more difficult it will be to compare though. But for IaaS it is definitely possible.

    Price, Performance, SLA, Security/Trust are key parameters in choosing the right cloud provider, but Price and SLAs are the only objective parameters these days.