Chris Gaun

A member of the Gartner Blog Network

Research Analyst
4 years at Gartner
7 years IT industry

Chris Gaun is an Analyst with Ideas Research.

A Price Premium for Components of Integrated Systems?

by Chris Gaun  |  March 29, 2013  |  Submit a Comment

Comparing integrated systems, which deliver a combination of server, storage and/or network devices in a pre-integrated stack coupled with software, can be difficult. By their nature, integrated systems typically have unique value propositions that derive from a particular strength of their respective vendors. Further, due to packaging differences, it can be challenging to compare the prices of integrated systems from different vendors. Intuitively, it seems clear that vendors expect to charge a premium for integrated systems in exchange for the total cost of ownership (TCO) savings that users can achieve by avoiding the challenges of integrating different system components and software themselves. However, users may be concerned that vendors will also charge premiums for the components of these systems, in exchange for the “privilege” of supporting those components in an integrated systems environment. An informal review of the pricing for some of the server components of several integrated systems shows that, after considering a typical discount, there may not in fact be a significant uplift on the price of components of these systems.

[Chart: Pricing of a particular Cisco UCS server configuration compared with a similar server from a competitor]

Vendors typically claim that integrated systems have lower TCO than an equivalent set of commodity components that are deployed separately. For example, marketing for Cisco’s UCS server, which makes up the compute segment of Cisco and NetApp’s FlexPod integrated system, claims to save money on “network interface cards (NICs), host bus adapters (HBAs), cables, and switches.” Indeed, integrated systems may save on labor and other operational costs for a business, while making that business more competitive. The question then becomes: what is this convenience worth to a particular organization?

Gartner’s Tech Planner tool can be used to find pricing for many of the server components of integrated systems, down to the part level, and to discover competing products from all the server vendors (other components of an integrated system – e.g. networking – require a separate exercise). Based on a quick check of this data, some preliminary conclusions are as follows:

  • Users may find that Cisco UCS blades have a higher non-discounted price when compared to competitors’ non-discounted prices (leftmost bar in graph above). However, based on pricing information from resellers, government contracts, and industry standard benchmarks, the servers are discounted to be competitively priced with other vendors’ similar products (rightmost bar in graph). The comparison of discounted vs. discounted pricing was so close as to be within a margin of error.
    • Preliminary Takeaway: Publicly available pricing – i.e. non-discounted – often sets an upper bound, below which the customer has less information and negotiation takes over. The discounted price on a Cisco UCS server may be less than a competitor’s non-discounted price (middle bar in graph). As in all negotiations, it is important to get as much information on discounting as possible to ensure competitive pricing.
  • Vendors do not charge large premiums for the individual parts of integrated systems’ servers. Although not graphed, a comparison of two identical builds of HP Matrix servers (one priced part by part, the other as an assembled unit) shows no large pricing uplift for the ready-built complete server compared to a bill of materials for that server. Furthermore, a quick side-by-side comparison of other integrated system server parts does not show a large markup on individual parts that are available elsewhere – e.g. Intel processors.
    • Preliminary Takeaway: It is wise to consider the integrated system server as a whole. Digging down to a very detailed parts level may not be warranted for integrated systems hardware built from commodity parts.
  • The server hardware in integrated systems, plus support, will have very similar pricing for equivalent products from different vendors.
    • Preliminary Takeaway: It is important to do research into different products on the market, but most of the differentiation (savings or premiums) may be found higher up on the solution stack, or in the simplification of networking.

The above are all very early thoughts, and more results are needed before they become actual research. The next step is to determine the difference in total cost of ownership of cabling, switches, networking and all the other costs in an integrated system. The goal is to derive a bill of materials and compare it with the cost of a pre-built system, which gives the premium. At first glance, though, it appears that vendors are not counting on marking up the components of integrated systems, but on charging for value offered at higher levels of their integrated stacks.
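To make that bill-of-materials arithmetic concrete, here is a minimal sketch in Python. All part names, prices, and the discount rate are invented placeholders, not Tech Planner data or real vendor quotes.

```python
# Minimal sketch of the bill-of-materials comparison described above.
# Parts, prices, and the discount rate are hypothetical placeholders.

bill_of_materials = {
    "2x Intel Xeon processors": 3200.00,
    "128 GB RAM": 1800.00,
    "2x 10GbE adapters": 900.00,
    "chassis, drives, misc.": 2100.00,
}

prebuilt_list_price = 11_500.00  # pre-integrated server, list price (hypothetical)
typical_discount = 0.30          # assumed negotiated discount off list

bom_total = sum(bill_of_materials.values())
prebuilt_discounted = prebuilt_list_price * (1 - typical_discount)

# Premium (or savings) of the pre-built system relative to sourcing the parts
premium_vs_list = (prebuilt_list_price - bom_total) / bom_total
premium_vs_discounted = (prebuilt_discounted - bom_total) / bom_total

print(f"Bill of materials total:       ${bom_total:,.2f}")
print(f"Pre-built at list price:       ${prebuilt_list_price:,.2f} ({premium_vs_list:+.1%})")
print(f"Pre-built after 30% discount:  ${prebuilt_discounted:,.2f} ({premium_vs_discounted:+.1%})")
```

In this toy example, the apparent premium at list price largely disappears once a typical discount is applied – the same pattern the informal review above suggests.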

Follow Chris Gaun on Twitter

Note: This is an individual analyst’s blog and not a piece of peer reviewed, actionable, Gartner research.

Category: Total Cost

Selecting Hadoop Server Hardware For Big Data Workloads

by Chris Gaun  |  March 11, 2013  |  1 Comment

IT organizations can deploy a Hadoop cluster by buying an appliance, going to IaaS/PaaS public clouds, or building the solution themselves. Many major enterprise hardware vendors offer Big Data Hadoop appliances, as shown in the table below (Merv Adrian has more detail on the systems. Adrian, Arun Chandrasekaran, Svetlana Sicular and Marcus Collins are my go-to analysts for Big Data Hadoop questions):

Vendor   | Appliance Name
EMC      | Greenplum Data Computing Appliance (DCA)
HP       | AppSystem for Apache Hadoop
IBM      | PureData System for Analytics
NetApp   | Open Solution for Hadoop
Oracle   | Big Data Appliance
Teradata | Aster Big Analytic Appliance

These appliances provide an out-of-the-box solution for customers who want to do as little as possible to get Hadoop up and running. However, there will always be some organizations that prefer the DIY approach, the same way some leading-edge users set up HPC clusters themselves using Beowulf. These users select standard servers for handling different Hadoop functions, which they then assemble into a complete Hadoop environment. Cloudera, a Big Data Hadoop provider, certifies hardware specifically for this purpose. Cloudera-certified servers include the Supermicro 815 and Supermicro 826, which are optimized for Hadoop Name Nodes (which should be memory-heavy) and Data Nodes (which are more storage-heavy), respectively. Gartner Tech Planner can be used to find other systems with processors and other specifications similar to the two Supermicro servers mentioned above. The table below shows some candidates:

Suitable for Hadoop Name Node | Suitable for Hadoop Data Node
IBM – System x3550 M4         | IBM – System x3630 M4
Dell – PowerEdge R620         | Dell – PowerEdge R520
HP – ProLiant DL160 Gen8      | HP – ProLiant DL380e Gen8
HP – ProLiant DL360 Gen8      |

The list is incomplete, and there are likely to be other vendors and server products that are well suited for deploying Hadoop. Please use the comment section if you know of any other servers that fit into these two groups.
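For readers who want to screen additional candidates themselves, below is a rough Python sketch of the memory-heavy vs. storage-heavy distinction described above. The specifications are invented placeholders rather than Tech Planner data, and real sizing should follow the Hadoop distribution's reference architecture.

```python
# Rough sketch: sort candidate servers into Name Node or Data Node roles based
# on their memory-to-storage balance. The specs are hypothetical placeholders,
# not the actual configurations of the servers listed above.

candidates = [
    {"model": "Example 1U server", "ram_gb": 128, "drive_bays": 4},
    {"model": "Example 2U server", "ram_gb": 64,  "drive_bays": 12},
    {"model": "Example tower",     "ram_gb": 32,  "drive_bays": 4},
]

for server in candidates:
    # Heuristic: Name Nodes keep the filesystem namespace in memory, so favor RAM;
    # Data Nodes hold the data blocks, so favor lots of drive bays.
    if server["drive_bays"] >= 8:
        role = "Data Node candidate (storage-heavy)"
    elif server["ram_gb"] >= 96:
        role = "Name Node candidate (memory-heavy)"
    else:
        role = "neither profile fits well"
    print(f"{server['model']}: {role}")
```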

Follow Chris Gaun on Twitter

Note: This is an individual analyst’s blog and not a piece of peer reviewed, actionable, Gartner research.

Category: Big Data

Big Data and Hadoop are leading scientists to ask bigger questions

by Chris Gaun  |  February 22, 2013  |  Submit a Comment

I have a new article at Quartz (a relatively new magazine under The Atlantic Group).

In 1905, Albert Einstein derived that light was composed of particles by fitting his theory to just a handful of data points. This discovery changed our understanding of basic physics and helped usher in a new era of quantum mechanics. Today, scientists often need to interpret much larger data sets to drive discoveries.

A little more than a decade ago, the first sequencing of a human genome cost $100 million. Now, the same results cost no more than a used car. At about 0.8 to 1 terabyte, the full genome creates more than 4 million times the amount of data that Einstein was investigating. Some scientists and researchers are using tools that were developed by online commerce and search engines to tackle these new questions.

In 2003 and 2004, Google published two papers that explained how the company repeatedly digests almost the entire internet to collect data for our searches every couple of days and, eventually, hours. (Google has recently moved away from this system of indexing to something new that can log the web in real time and scale up to millions of machines.) The findings shook the industry. Previously, to process tons of information, companies bought very expensive, very reliable, very fast computers that churned data as quickly as the newest technology could. Budgets being budgets, only a few of these premium boxes were in place at any one time. Instead, Google segmented the work into small pieces that were distributed onto thousands of cheaper computers that could produce the type of intelligence we are now accustomed to in searches. If the old way was a single farm to grow flowers and collect pollen, then this new system was thousands of pollen-hoarding bees that distributed themselves to fields far and wide. The less expensive hardware employed to crunch data meant more computers could be afforded within a budget while maintaining reliability. If a few computers went down, there were thousands left to pick up their duties.

Some scientists, often working on shoestring funding, thought they could greatly benefit from this approach. Before that could happen though, the vague description in Google’s papers had to be developed into a more concrete system. Yahoo, and others, helped do just that by developing a free version of Google’s methods called Hadoop.

Making sense of large, distributed data by splitting up the processing and then coalescing small data chunks is exactly what Hadoop was designed to do. Soon, scientists were powering software with Hadoop to accomplish exactly the tasks needed for genome research. Another innovation was needed, though. Hadoop was built to work on a large number of cheap computers, but scientists don’t have thousands of computers the way Google and Yahoo do. The solution came from the ability to obtain these resources using Infrastructure as a Service cloud computing.
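As a toy illustration of that split-and-coalesce model (this example is mine, not part of the Quartz article), here is a minimal map-and-reduce word count in Python. A real Hadoop job would run the same two steps distributed across many machines.

```python
from collections import defaultdict

# Toy illustration of the MapReduce pattern that Hadoop implements:
# map each small chunk of input into key/value pairs, then reduce the pairs by key.

chunks = [
    "big data is leading scientists",
    "to ask bigger questions about data",
]

def map_chunk(chunk):
    # Map step: emit (word, 1) for every word in one chunk of input.
    return [(word, 1) for word in chunk.split()]

def reduce_pairs(pairs):
    # Reduce step: coalesce the counts emitted for each word.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

mapped = [pair for chunk in chunks for pair in map_chunk(chunk)]
print(reduce_pairs(mapped))   # e.g. {'big': 1, 'data': 2, ...}
```

On a Hadoop cluster, each chunk's map step would run on the machine that already stores that chunk, and only the small intermediate pairs would move across the network.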

READ THE REST: http://qz.com/55503/big-data-is-leading-scientists-to-ask-bigger-questions/

Follow Chris Gaun on Twitter

Note: This is an individual analyst’s blog and not a piece of peer reviewed, actionable, Gartner research.

Category: Uncategorized

Apple, Amazon and IBM Show Possible Pitfalls of Beloved High Profit Margins

by Chris Gaun  |  February 12, 2013  |  11 Comments

Follow Chris Gaun on Twitter
Last Tuesday, IBM cut the price of some of its POWER line to under $6,000. Steve Lohr of the New York Times observed that this move is designed to entice customers interested in analytic workloads and “notably Hadoop.” One of the better introductions to Hadoop that I’ve seen is Robert Scoble’s interview with the CEO of Cloudera, Mike Olson (video below). Olson reiterates that relational databases will always be important, but he also notes that open source Hadoop was designed with clustered commodity servers in mind. In comparison, the huge shared-memory servers that power traditional big databases often sell at high margins. Some of these boxes need to compete against commodity hardware to win customers in the Hadoop space, and the price drop is indicative of that.

There are various reasons high margins can exist. Higher margins may appear when there are large barriers to entry or when customers are “vendor locked” into a single product line.

When Apple and Amazon released their financial results over the past few weeks, many were puzzled by the equity market’s response. Apple was wildly profitable while Amazon lost money, yet the former was punished while the latter rose in the stock market. Eugene Wei writes that low margins may answer part of the mystery:

An incumbent with high margins, especially in technology, is like a deer that wears a bullseye on its flank. Assuming a company doesn’t have a monopoly, its high margin structure screams for a competitor to come in and compete on price, if nothing else, and it also hints at potential complacency. If the company is public, how willing will they be to lower their own margins and take a beating on their public valuation?

Hat-tip to Derek Thompson at The Atlantic for finding the above quote.

Now, Apple has patents, which grant a limited-time monopoly. There may also be costs in switching smartphone operating systems. That said, many companies saw Apple’s high profits per good sold, and the competition is now fierce. On the other end, Amazon is competing with the brick-and-mortar, low-margin retail business, but has a lead in the online realm, where huge data centers, fulfillment centers, and other high-cost expenditures are needed for entry. The outcome: Apple is often priced at single-digit multiples of its earnings, while the market is comfortable with Amazon trading at hundreds of times its profits. According to Wei, that’s because the market believes Apple is more easily assailable than Amazon.

It seems then that technology and the market may abhor precious high margins at times.

Note: This is an individual analyst’s blog and not a piece of peer reviewed, actionable, Gartner research.

Category: Total Cost

Amazon’s Cloud Revenue is Important Information for Potential Customers

by Chris Gaun  |  January 30, 2013  |  2 Comments

Amazon released another quarterly report yesterday, and there was still no indication that the cloud AWS segment will be broken out from its grouping with non-cloud products. Today, AWS revenue is recognized in the “other” revenue category alongside non-cloud products, while most of Amazon’s business falls in the “Media” and “Electronics and other general merchandise” categories. Predicting the broader “other” revenue is possible, and I have done so in The Atlantic, but the exact percentage contributed by Amazon’s cloud business is much trickier – as mentioned in a previous blog. The size of Amazon’s cloud business is important information for customers, as it can help influence their own decision to invest in AWS.

[Chart: Amazon capital expenditures by quarter]

One possible indicator of the relative growth of Amazon’s cloud business is the overall amount of money being spent on products, in the form of capital expenses. Indeed, as the chart above shows, Amazon’s capital expenditures have increased substantially over the past few years. However, as part of these investments, Amazon added 20 new consumer fulfillment centers in 2012 – for a total of 89 – and that spending is impossible to separate from possible cloud-related investments – e.g. data center equipment, software costs, etc.

Other indicators of AWS growth this quarter: four of the twelve highlights in the report concerned AWS (source):

• Amazon Web Services (AWS) announced the launch of its newest Asia Pacific Region in Sydney, Australia, now available for multiple services including Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS). Sydney joins Singapore and Tokyo as the third Region in Asia Pacific and the ninth Region worldwide.
• AWS announced that SAP Business Suite is now certified to run on the AWS cloud platform. Enterprises running SAP Business Suite can now leverage the on-demand, pay as you go AWS platform to support thousands of concurrent users in production without making costly capital expenditures for their underlying infrastructure. AWS also announced that SAP HANA, SAP’s in-memory database and platform, is certified to run on AWS and is available for purchase via AWS Marketplace.
• AWS continued its rapid pace of innovation by launching 159 new services and features in 2012. This is nearly double the services and features launched in 2011.
• AWS has lowered prices 24 times since it launched in 2006, including 10 price reductions in 2012.

So AWS is clearly growing – the question now is, “by how much?” On that point, organizations purchasing the service have little information. The business’s financials are not immaterial for mission-critical application deployments, among other reasons. For example, if the service currently has a large burn rate, that might indicate that current prices are loss leaders and will eventually rise. Or it may mean the service is perfectly solvent and profitable in the long term, but is growing – spending money to make money. Or it might have many other explanations. It is impossible to know without more information.

Follow Chris Gaun on Twitter

Note: This is an individual analyst’s blog and not a piece of Gartner research.

Category: Uncategorized

How Can Public Clouds Reduce Business Risk For Users?

by Chris Gaun  |  January 28, 2013  |  5 Comments

Follow Chris Gaun on Twitter

During a recession, purchases drop. Customers using public IaaS cloud computing often need to commit to services for only a short period of time – typically an hour or a month. That shifts inventory risk, i.e. the chance that products won’t sell or that the price of those products will drop, to the cloud IaaS vendor.

Public IaaS cloud computing is computing sold on a utility-based model, where customers pay only for the services used. A server that sits unplugged in a closet requires the same initial outlay as one that powers mission-critical workloads for a profitable business. Cloud is different. A customer does not need to pay for a cloud instance that remains completely idle. For the cloud vendors, this may mean that when business profits go south, customers turn off unused resources. During cyclical downturns, many companies may all feel pinched at the same time. The graph above shows that electricity usage, the canonical example of a utility resource, follows that trend.

During recessions, purchasing cloud resources can have a number of advantages over purchasing physical IT assets. First, recessions aren’t exactly predictable. Purchases made right before – or at the start of – a downturn generally meet the requirements of pre-recession workload trends; this over-provisioning is a form of inventory risk. Purchases of physical assets at the low point of a business cycle may indeed be lower, but that low point won’t last very long, because GDP downturns are generally a blip. The use of public cloud IaaS resources, on the other hand, would more closely conform to the cyclical events of an economy, incurring less spending and risk for customers.
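Below is a minimal Python sketch of that inventory-risk argument. All prices and utilization figures are made up; the point is only that pay-per-use spend shrinks with demand while an upfront purchase is sunk.

```python
# Toy comparison of upfront hardware spend vs. pay-per-use cloud spend when
# demand falls partway through the year. All figures are hypothetical.

server_purchase = 10_000.00    # upfront cost of a physical server (hypothetical)
cloud_rate_per_hour = 0.50     # on-demand instance price (hypothetical)
hours_per_month = 730

# Assume a downturn hits in month 7 and the workload drops to 40% utilization.
monthly_utilization = [1.0] * 6 + [0.4] * 6

cloud_cost = sum(cloud_rate_per_hour * hours_per_month * u
                 for u in monthly_utilization)

print(f"Physical server (paid up front, used or not): ${server_purchase:,.2f}")
print(f"Cloud instances (paid only while used):       ${cloud_cost:,.2f}")
# The purchased server's cost is sunk regardless of the downturn; the cloud
# bill shrinks with utilization, shifting the inventory risk to the vendor.
```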

 

Category: Cloud

Facebook vs. Google Search? Markets say “no.”

by Chris Gaun  |  January 18, 2013  |  2 Comments

Facebook announced a search engine this week called Graph Search. Many in the media and on Twitter have focused on the competitive position of the new product against Google. No doubt, when people hear “internet search” they think of the companies with the largest market share – Google and Bing. The finance community investing in these companies saw it differently, though.

Even if markets are not perfectly efficient, they generally bake news related to a company’s future earnings into stock prices rather quickly. For example, look up Dell’s stock price on the day of the rumors that it would go private: there is a huge leap in the price. The graph of Google’s stock price (blue line) on the day Facebook’s search was announced, by contrast, does not reflect lower future earnings expectations. It is flat, as if no news had come in.

Markets may think Facebook’s search will bomb as an alternative to Google, but other services may be deeply impacted. For example, look at Yelp’s price during the announcement in the afternoon (green line). Now that is what the impact of a major new competitor in a company’s market can look like. Often tech and finance don’t interact much, but here is a good example where information from the equity markets can actually help the tech industry sharpen its analysis. Rebecca Rosen in The Atlantic points to Yelp, LinkedIn, and OKCupid in the title of her analysis of the Facebook announcement.

I’m not sure if 5,000 Facebook friends (which was the maximum number of “friends” last time I checked) would ever suggest the same New York City restaurant more than twice, but this is the kind of value that Yelp clearly delivers. Granted, “pictures of my cousin’s baby” has potential with ads for infant clothes, etc., but that is not something people use Yelp for. That is why I do analysis and show deference to markets for predictions!

Gartner has research for technology investors that builds on independent and objective industry research: http://www.gartner.com/technology/research/gartner_invest.jsp

Follow Chris Gaun on Twitter: http://twitter.com/chris_gaun

Category: Uncategorized

With Writing on the Wall in PC Market, Vendors Forced To Evolve

by Chris Gaun  |  January 15, 2013  |  2 Comments

Several data points suggest a secular change in the PC market. First, Gartner said yesterday that PC shipments dropped 4.9% in the last quarter of 2012, and that the change was partly due to the rise of tablets. This table in Gartner’s press release paints a better picture:

Note: Data includes desk-based PCs and mobile PCs, including mini-notebooks but not media tablets such as the iPad. Data is based on the shipments selling into channels.
Source: Gartner (January 2013)

Company    | 4Q12 Shipments | 4Q12 Market Share (%) | 4Q11 Shipments | 4Q11 Market Share (%) | 4Q12-4Q11 Growth (%)
HP         | 14,645,041     | 16.2                  | 14,711,280     | 15.5                  | -0.5
Lenovo     | 13,976,668     | 15.5                  | 12,915,766     | 13.6                  | 8.2
Dell       | 9,206,391      | 10.2                  | 11,633,387     | 12.2                  | -20.9
Acer Group | 8,622,701      | 9.5                   | 9,690,624      | 10.2                  | -11.0
ASUS       | 6,528,228      | 7.2                   | 6,133,042      | 6.5                   | 6.4
Others     | 37,393,913     | 41.4                  | 39,934,184     | 42.0                  | -6.4
Total      | 90,372,942     | 100.0                 | 95,018,284     | 100.0                 | -4.9

FULL PRESS RELEASE: http://www.gartner.com/it/page.jsp?id=2301715

HP, Dell, and Acer all saw declining PC sales year-over-year.
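As a quick sanity check on the table, the growth column can be recomputed directly from the shipment figures; a small Python sketch:

```python
# Recompute the 4Q12 vs. 4Q11 growth column from the shipment figures above.
shipments_4q12_4q11 = {
    "HP":         (14_645_041, 14_711_280),
    "Lenovo":     (13_976_668, 12_915_766),
    "Dell":       (9_206_391, 11_633_387),
    "Acer Group": (8_622_701, 9_690_624),
    "ASUS":       (6_528_228, 6_133_042),
}

for vendor, (q4_2012, q4_2011) in shipments_4q12_4q11.items():
    growth = (q4_2012 - q4_2011) / q4_2011 * 100
    print(f"{vendor}: {growth:+.1f}%")   # matches the press release's growth column
```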

Second, Dell is rumored to be seeking funding in order to take the corporation private. If it takes place, this leveraged buyout will be in the top 10 of all time in inflation-adjusted terms, according to the tally by Business Insider. My friend over at Quartz, Simone Foxman, speculates that declining PC sales might be partly responsible for this move by the company, which has a market capitalization of more than $18 billion. If I can sum up Simone’s reasoning here: PC-heavy vendors like HP have been hammered in the equity markets. The turnaround needed by consumer PC makers is not going to be without bumps, and the changes might be easier to make in a private company. Dell has been hit by equity markets as well (chart above).

A year ago, Michael Dell missed tablets’ influence on the PC market.

HP and Dell have both been trying to increase the percentage mix of revenue that comes from enterprise. They are increasing cloud offerings and buying companies to offer more enterprise options. Dell was one of only two hardware vendors in the 2012 Magic Quadrant for Cloud Infrastructure as a Service (the other was Fujitsu). These efforts will be a critical part of Dell’s strategy to evolve as the traditional PC market, which has long served it so well, settles into its golden years.

Category: PC

Can Mainframes be the Least Expensive Option?

by Chris Gaun  |  January 14, 2013  |  2 Comments

IBM 704 mainframe circa 1957; SOURCE: NASA

“…mainframe is still the digital workhorse for banking and telecommunications networks — and why mainframes are selling briskly in the emerging economies of Asia and Africa.” ~ Steve Lohr in 8/28/2012 New York Times

Mainframes are not small purchases. Customers who purchase mainframes, and the vendors that sell them, say that the value (and premium) comes from reliability, stack integration, and other benefits. Switching to an alternate architecture can also be more expensive than staying on Big Iron – a situation sometimes called “vendor lock.” However, maybe in some situations they’re the least expensive option. The notion is hardly consensus, but it’s worth investigating the possible avenues where mainframes can be cheaper than commodity hardware.

Steve Lohr noted above in the New York Times that mainframes are “selling briskly” in Asia and Africa. The “vendor lock” explanation doesn’t hold up in these cases, because emerging markets are probably building out rather than switching equipment. The reliability, stack integration, and other possible benefits mentioned before might be the sole decision point. However, it might be the case that delivering a solution in a place where there is little technical expertise is simply cheaper. Linux & .Net data centers work because there are lots of people in the labor market who are familiar with the hardware, OS, and applications. That may not be true in emerging markets.

A similar pattern is sometimes seen in the converged solution (appliance) market. It can be cheaper to drop in a ready-built solution than to find people who can build it cheaply – and have it work! In the enterprise, even a small chance of a mistake can draw out projects, lead to more purchases, and hurt a business’ competitive advantage. It is possible, then, that the cheapest solution is something that comes with a premium.

Gartner research, i.e. not my theoretical musings in this blog, has great advice on this topic for customers. I’d suggest Mainframe Application Liability: Don’t Get Left Behind by Kristin R. Moyer, Juergen Weiss, Dale Vecchio and, more generally, the work of Nik Simpson on Unix servers.

Chris Gaun on Twitter: http://twitter.com/chris_gaun

Category: Total Cost

How Tech Pricing is Like Airplane Tickets

by Chris Gaun  |  January 10, 2013  |  5 Comments

Image Source: Boeing Dreamscape

The AP recently pointed out that technology-driven ticketing services are now struggling with airlines that withhold some pricing data from third parties. It explains:

Global distribution systems that supply flight and fare data to travel agents and online ticketing services like Orbitz and Expedia, accounting for half of all U.S. airline tickets, complain that airlines won’t provide fee information in a way that lets them make it handy for consumers trying to find the best deal.

The airlines do usually supply their basic costs to get from point A to point B. This stripped-down pricing, with no extras, is what travel services need to produce an apples-to-apples comparison. Similarly, Tech Planner from Gartner uses basic server configurations to help users come up with ballpark comparisons of the costs of different suppliers. Users can add options from the Tech Planner database to these basic server configurations. But they should not compare a heavily provisioned server (say, with tons of memory) with a light one from an alternative supplier.

Still, the ability to perform objective, baseline comparisons of the costs of different solutions is an important requirement for a healthy market. What online ticketing services aim to accomplish in the travel market is a comparison of total travel cost, a variable that IT customers struggle with as well. The airlines want access to online markets – which are enabled by the third-party, tech-driven ticketing services – but they do not want to commoditize options like extra leg room.

A perfectly competitive system, which commodities come closest to, earns normal profits. With normal profits, the cost to produce one unit of a good (including management cost, interest on capital, etc.) is equal to the revenue received for that good. There is no extra profit on top of the cost. Analysts spend a lot of time comparing one technology against another, and pricing has to be part of that process. As with airline ticketing services, delivering objective pricing comparisons makes the IT market more competitive and drives down the cost for end users.
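Here is a minimal Python sketch of that apples-to-apples idea; the configurations, prices, and per-option costs are invented placeholders, not Tech Planner data or real quotes.

```python
# Toy sketch of an apples-to-apples baseline comparison: strip each quote back
# to a common baseline configuration before comparing prices. All numbers are
# hypothetical placeholders.

quotes = [
    {"vendor": "Supplier A", "price": 7200.00, "ram_gb": 64,  "drives": 2},
    {"vendor": "Supplier B", "price": 9800.00, "ram_gb": 256, "drives": 8},
]

baseline = {"ram_gb": 64, "drives": 2}                 # the stripped-down configuration
option_prices = {"ram_gb": 15.00, "drives": 250.00}    # assumed cost per GB / per drive

for quote in quotes:
    adjusted = quote["price"]
    for option, unit_price in option_prices.items():
        extras = quote[option] - baseline[option]
        adjusted -= extras * unit_price                # remove options above the baseline
    print(f"{quote['vendor']}: quoted ${quote['price']:,.2f}, "
          f"baseline-equivalent ${adjusted:,.2f}")
```

Only the baseline-equivalent figures are comparable; the raw quotes are not, for the same reason a fare with checked bags and extra leg room should not be compared to a bare ticket.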

Category: Total Cost