Thomas Bittman

A member of the Gartner Blog Network

Thomas J. Bittman
VP Distinguished Analyst
18 years at Gartner
29 years IT industry

Thomas Bittman is a vice president and distinguished analyst with Gartner Research. Mr. Bittman has led the industry in areas such as private cloud computing and virtualization. Mr. Bittman invented the term "real-time infrastructure," which has been adopted by major vendors and many…

Going Laptopless

by Tom Bittman  |  April 5, 2011  |  2 Comments

I’m a knowledge worker. I’m in Copenhagen, on business. My laptop is in Connecticut. And I’m OK with that.

Now let me preface this by saying that, as an analyst, I don’t cover client computing, PCs or tablet computers. I’m writing this as Joe Knowledge Worker. Even so, I’m going to avoid using product brand names. I’m not promoting a specific product. But I am promoting a new way of getting things done.

I know I’m not the first to have this aha moment, and that’s a bit of a sore point with me. I still have a working 8080 system from the mid-1970s. I bought IBM’s first PC when it came out. I bought IBM’s first laptop computer – the PC Convertible – in 1986 (and yes, I still have it and it still works). I jumped on the Palm Pilot as soon as it was available. I consider myself an early adopter. When it comes to tablet computers, however, my son is the early adopter and the pioneer. He’s been using his tablet computer in high school for a year now, and trying to convince me that it would work for me, too. I didn’t see it then, but I do now.

I tried it, on two business trips. On the first one, I pulled out the tablet computer and played a little with it. Still, I did most of my work on the laptop. On the second trip, my laptop battery died on a flight. I wrote a complete research note on the tablet. Suddenly, work was getting done, and without a laptop.

I’m in love. I love the lo-ong battery life. I love the tactile user interface. I love the super-thin size and portability. These three are huge for a traveler.

There are trade-offs. A physical keyboard is helpful, but I’m finding that to be a non-issue, and possibly more of a rut than a need. A DVD player is nice for watching shows away from home – but Netflix works just fine instead. A data warehouse on a hard disk is nice, but do I really need all of those files with me? Cloud storage works great when I’m connected – which is very often – and I have plenty of local storage for offline files. Showing presentations? I have the adaptor, and it works perfectly.

I’m an inveterate planner and organizer. Spreadsheets and lists that used to live on my laptop don’t live there anymore. It’s all on the tablet. Frankly, at this point, there are only a few things that really require my laptop – and I’m working to reduce that, too.

So, I’m in Europe and away from the office for four days, and work has not stopped, and I’m not searching every airport for outlets to give my laptop a little more juice, and my backpack is extremely light (and probably unnecessary now), and I may actually do more “knowledge work” on my tablet computer on this trip than I would have with a laptop. And, of course, I’ve just posted my first blog entry from my tablet.

I’ve only had this device for about three weeks, but I suspect that bringing the laptop on trips will be the exception going forward. Not quite an early adopter – but I’m all in now.

Category: Cloud, Education, Future of Infrastructure, Industry Analyst

The End of Server Growth?

by Tom Bittman  |  February 11, 2011  |  5 Comments

Will virtualization, multicore, and cloud computing trends send x86 architecture server and processor volumes down for the next decade? It certainly is a realistic scenario – and perhaps the most likely.

At Gartner, we spend a lot of time trying to understand future scenarios, the likelihood of each, indicators that a scenario is likely to occur, impacts on our clients, and what our clients should do. We’ve studied the impact of virtualization on the server market since virtualization was first introduced <begin chest-thumping>and Gartner was the first firm to point out the negative ramifications of virtualization on server volumes<end chest-thumping>. But we’re getting to the moment of truth.

With the exception of the economic collapse in 2009, server volumes have been growing dependably for years. However, virtualization rates are hitting the point where the negative effect of virtualization on the server market is becoming unmistakable. Not in five years. Now.

2010 was a good year for servers – nearly 9 million were sold. My contention is that if virtualization didn’t exist, there would have been 13, or 14, or 15 million sold.

The engine of server market growth has been the growth of workloads. Since 2004, the compound annual growth rate (CAGR) in workloads has been about 16 percent. 2010 was certainly a much better year than that – but if you factor in the volume decline in 2009, the growth in 2010 exactly made up the difference.

If the workload CAGR remains steady, server volumes will start to decline in 2011, and we won’t see 2010’s volumes again in this decade.

The good thing – virtualization (and cloud computing) makes it easier and faster to deploy a workload, and that has a tendency to increase the workload CAGR. However, even accounting for faster workload growth, 2010 is either at or near the peak of server volumes for the next ten years.

However, if Moore’s Law is going to be driven by increasing numbers of cores, those cores are going to need VMs to leverage them. Multicore is going to drive higher virtualization densities, and even fewer servers.

What will it take to drive server volumes up? Low virtualization growth, high workload growth, low virtualization densities. A combination of factors that seems unlikely.
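To make the arithmetic concrete, here is a back-of-the-envelope sketch of the scenario. Every number in it (the starting workload count, the virtualization share, the VM density, and their growth rates) is an illustrative assumption, not Gartner data or a forecast; the point is simply to show how virtualization share and VM density interact with 16% workload growth.

```python
# Back-of-the-envelope model of annual x86 server shipments driven by new workloads.
# Every input below is an illustrative assumption, not Gartner data or a forecast.

def project_shipments(years=10,
                      new_workloads=14.0,      # new workloads deployed this year, millions (assumed)
                      workload_cagr=0.16,      # ~16% annual workload growth
                      virt_share=0.50,         # share of new workloads landing on VMs (assumed)
                      virt_share_growth=0.09,  # annual increase in that share (assumed)
                      vms_per_host=8.0,        # average VM density per physical host (assumed)
                      density_growth=0.10):    # annual density gain from multicore (assumed)
    for year in range(2010, 2010 + years + 1):
        share = min(virt_share, 0.95)                   # virtualization share eventually saturates
        hosts_for_vms = new_workloads * share / vms_per_host
        bare_metal = new_workloads * (1 - share)
        shipments = hosts_for_vms + bare_metal          # servers needed for this year's new workloads
        print(f"{year}: ~{shipments:.1f}M servers for {new_workloads:.1f}M new workloads")
        new_workloads *= 1 + workload_cagr
        virt_share += virt_share_growth
        vms_per_host *= 1 + density_growth

project_shipments()
```

With these particular assumptions, shipments peak in 2010 and do not recover to that level within the decade. Slow the growth of virtualization share and density (or speed up workload growth) and the curve turns upward again – which is exactly the combination of factors described above.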

Bottom line – there are a number of realistic scenarios for server volumes in the next decade. Each scenario will drive different vendor behavior (and results), pricing, and end user strategies. But – anyone want to place a bet? I’m blogging it, so I’m placing mine right now.

Category: Cloud, Virtualization

Embracing the Blur

by Tom Bittman  |  February 9, 2011  |  4 Comments

We’re having an interesting discussion inside of Gartner (due credit to Neil MacDonald, Lydia Leong, Cameron Haight and David Cearley for the ideas in this post – I hope they post further on this). The concepts here aren’t new. For example, in 2004, I talked about “the walls coming down” between business, the data center and development. I wasn’t unique – others have discussed boundaries breaking down between different aspects of IT architecture for years. However, I’m not sure how many people are aware of how utterly pervasive this megatrend in IT really is, and how much it affects all of us. In a word, the megatrend is "blur." Think about it.

  • Whatever happened to the market where there were distinct servers, storage, and networks? Fabric is blurring that.
  • What the heck is an operating system any more, and what does it matter when I have a virtual pool of distributed resources I need to use?
  • Whatever happened to the boundary between consumer technology and enterprise technology? Consumerization of IT. And not just personal technology devices – some IT services are given away for free (and subsidized by advertising). Which leads to boundaries disappearing in business models.
  • Whatever happened to the boundary between outsourcing and insourcing? Now we have cloud computing: public, private, hybrid, and every other variation. Looking for a black and white definition of cloud computing? A waste of time – it’s gray!
  • What about ownership of intellectual property? Open source, community collaboration. Is it plagiarism if you add value to existing content? In a society of information, can you afford not to build on what’s already out there? What should 21st century students do?
  • What about the boundary between trusted enterprise data and untrusted data? Can we really afford to ignore any business information that might be useful? Isn’t it about what we do with the data, rather than whether the data is 100% trusted and owned by the enterprise? The boundaries of data used for business intelligence have been blown completely down. For that matter, we are entering a period of data overload – some we can trust, some we partially trust, some that is impartial, some that is partial. Successful people and businesses will be able to find value in that data. Unsuccessful people and businesses will drown in the data, or hide from it.
  • Whatever happened to the boundary between IT and the business? In some cases it is being formalized as service orientation (e.g., cloud computing); in other cases, the boundary simply does not exist. How many business people can afford to be laggards in leveraging the latest IT capabilities? How many IT personnel can ignore business strategy?
  • What about the boundary between applications and operations – and security, for that matter? It used to be that developers threw their creations over the wall for operations to run, with a kiss “good luck”. New applications are being written based on operational models, with automated deployment/operations/optimization in mind. Security is being captured as policy that moves with the application.

Virtualization. Consumerization. Cloud. Instant connections and collaboration. I could go on.

An overall IT megatrend today is a complete and utter blurring of boundaries – which would be manageable if it were only conceptual, but it directly affects people and market competition. It’s a lot harder to re-skill, re-organize, and react to partners that become competitors, competitors that become partners, and partners who are also competitors depending on the situation.

If there is one “skill” that is critical for an enterprise to have, and for individuals to have who use and/or help deliver IT capabilities (which, by the way, is everyone) – it’s “agility.” If you depend on the predictability of competition, and the predictability of a job category, you’re not gonna make it. You or your company will become noncompetitive faster than you can say “blur.”

To use Neil MacDonald’s perfect phrase, success requires “Embracing the Blur.”

(By the way, Neil has pointed out an interesting book by Stan Davis, called – not surprisingly – “Blur.” I need to take a look!)

Category: Agility, Cloud, Education, Future of Infrastructure, Virtualization

Economies of Fail

by Tom Bittman  |  December 7, 2010  |  2 Comments

Interesting discussions here at Gartner’s Data Center Conference in Las Vegas. While discussing the importance of economies of scale to cloud providers, I pointed out that economies of scale is a double-edged sword.

While enterprises tend to have many (often hundreds or even thousands of) IT services that they provide, cloud providers tend to have only one, or a handful, provided at huge scale. Standardization makes automation much easier, and certainly makes economies of large scale very attractive. But what happens when a “service” suffers a decline in demand? For an enterprise, diversification makes this much less of an issue – usually, a decline in one “service” will be made up by growth in another. The capital expense risk is real, but not huge. But what about a cloud provider that focuses on just that service?

Economies of fail.

Megaproviders in the cloud are not immune to economic declines, or changing demand. One of the benefits of cloud computing for end users is transferring their own capital risk to cloud providers. Doesn’t this sound an awful lot like the mortgage crisis in the U.S.?
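Coming back to the diversification point above, here is a toy illustration (the numbers are invented, and describe no real provider): the same 40% drop in demand for a single service barely dents a portfolio of hundreds of small services, but it is existential for a provider that sells essentially one service at massive scale.

```python
# Toy illustration of concentration risk - all numbers are invented for illustration.
import random

random.seed(1)

# A diversified enterprise portfolio: 200 smallish services of varying size.
enterprise_portfolio = [random.uniform(0.5, 2.0) for _ in range(200)]

# A focused megaprovider: one huge service with the same total capacity.
provider_portfolio = [sum(enterprise_portfolio)]

def demand_shock(portfolio, drop=0.4):
    """Cut demand for the single largest service by `drop` and report the impact on the total."""
    biggest = max(portfolio)
    total_before = sum(portfolio)
    total_after = total_before - biggest * drop
    return (total_before - total_after) / total_before

print(f"diversified enterprise: {demand_shock(enterprise_portfolio):.1%} of total demand lost")
print(f"focused cloud provider: {demand_shock(provider_portfolio):.1%} of total demand lost")
```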

For cloud providers to be successful, they must protect themselves. As much as possible, they must find additional markets for their services – markets whose demand is not directly correlated with their core service market – without abandoning the simplification and standardization that enable automation and economies of scale.

Potential customers of cloud providers should be very aware of a cloud provider’s business risk, and protect themselves. Cloud provider resiliency, market diversification and stability should be selection criteria. Remember: no provider is too big to fail – in fact, some providers might become so big and so focused that failure is inevitable.

Category: Cloud

Virtualization Then & Now: Symposium 2009-2010

by Tom Bittman  |  October 18, 2010  |  18 Comments

My first presentation at Symposium 2010 was “Server Virtualization: From Virtual Machines to Private Clouds.” Attendance was crazy – the large room was packed, people were standing at the back, and apparently a few dozen were turned away at the door. This proves that server virtualization is not only a hot topic, it’s getting hotter right now (one stat I mentioned was that more virtual machines would be deployed during 2011 than during 2001 through 2009 combined).

I started the presentation with some fundamental changes in server virtualization since I presented a year ago.

1) Virtual machine penetration has increased 50% in the last year. We believe that nearly 30% of all workloads running on x86 architecture servers are now running on virtual machines.

2) Midsized enterprises rule. For the first time, the penetration of virtualization in midsized enterprises (100-999 employees) now exceeds that of the global 1000 (or it will before year-end). There has been a HUGE uptake in the last year. Also, unlike large enterprises, midsized enterprises tend to deploy all at once – with outside help.

3) Hyper-V is under-performing. Maybe my expectations were too high, but Hyper-V has not grabbed as much market share as I was predicting. I especially thought that Microsoft would be the big beneficiary of midmarket virtualization. Surveys show otherwise – VMware is doing pretty well there. Here’s a theory. Clients repeatedly told us that live migration was a big hole in Microsoft’s offering – even for midmarket customers, who need it to reduce planned downtime when maintaining the parent OS. Microsoft’s Hyper-V R2 (with live migration) came out in August 2009. Was that too late? Did the economy put pressure on midsized enterprises to virtualize early, before Hyper-V R2 was proven in the market? Or did VMware just have too much mindshare?

VMware’s competition is growing (especially Microsoft, Citrix and Oracle), but VMware is still capturing plenty of new customers.

4) Private clouds are the buzz. Every major vendor on the planet who sells infrastructure stuff has a private cloud story today. In the last year, the marketing, product announcements and acquisitions have been mind-numbing. Some of this is clearly cloudwashing (“old stuff, new name”), but we’ve seen a number of smart start-ups captured by big vendors, and important product rollouts (notably VMware’s vCloud Director). Now the question is – what will the market buy?

5) IaaS providers are shifting to commercial VMs. IaaS (infrastructure as a service) providers have focused on open source and internal technologies to deliver solutions at the lowest possible cost. But that’s changing. In the past year, there’s been a rapidly growing trend for IaaS providers to add support for major commercial VM formats – especially VMware, but also Hyper-V and XenServer. The reason? To create an easy on-ramp for enterprises. As enterprises virtualize (and in many cases, build private clouds), the IaaS providers know that they need to make interoperability, hybrid deployment, overdrafting and migration as easy as possible. The question is whether that will require commercial offerings (such as VMware’s vCloud Datacenter Services or the Microsoft Dynamic Datacenter Alliance), or whether conversion tools will be good enough. I tend to think that service providers had better make the off-premises experience as identical to the on-premises experience as possible – and I’m not sure conversion will get them there.

Category: Cloud, Virtualization

The Buzz at Gartner’s Symposium 2010: Cloud!

by Tom Bittman  |  October 18, 2010  |  1 Comment

Gartner’s Symposium this year is a blow-out – more than 7,500 attendees, and more than 1,600 CIOs. That means a very busy week of presentations and one-on-ones. As an analyst, what I always find interesting is “the buzz.” You get a really good sense of what’s hot based on one-on-one load and one-on-one topics. I was one of a few analysts fully booked a few weeks before Symposium, so my topics are hot. The questions? Continued interest in virtualization, but shifting heavily to cloud computing, both private and public.

Because of presentations, roundtables and so forth, I only had 35 one-on-one slots available. 11 of those are on virtualization (mostly VMware and Microsoft). 9 are about cloud computing (mainly what’s ready, which services, which providers, customer experiences). 14 are about private cloud (how do I start, VMware’s vCloud, etc.).

The sense I get so far is that interest in cloud computing continues to grow, but there is more real activity and near-term spending on private cloud solutions. There’s a lot of interest in VMware’s vCloud – but attendees want some proof first.

At the end of the week, I’ll summarize what I learned. Should be a great week!

Category: Cloud, Virtualization

IT Operations: From Day-Care to University

by Tom Bittman  |  May 24, 2010  |  2 Comments

After spending the day discussing IT operations, here are some musings on the future of IT ops.

Traditionally, IT ops has been responsible for managing operationally "dumb" applications. These legacy applications are like infants – they need constant care and feeding. They can’t take care of themselves, and they rely entirely on others to survive. Actually, these dumb applications are even less capable than infants – at least infants cry when they’re hungry!

IT operations today is like day-care. Every infant is different, has different needs, and signals those needs in different ways. There aren’t many economies of scale here at all. Not a lot can be automated. And new infants are being added daily!

There are three major paths for IT operations in the future – and each of them is very different:

(1) The Day-Care for Clones: Limit IT operations to management of a single application (or a small number of applications). Knowing exactly how these applications work allows you to custom-design IT operations and automation to their needs. This is what cloud providers typically do today, and what application-centric environments (built around Oracle, for example) do as well.

(2) The Smart Day-Care: The effort for years has been to make the day-care smarter, more adaptive, more on-demand. This has been a huge challenge, and will continue to be a huge challenge. One new concept has been the introduction of virtual machines, which can be used to encapsulate workloads – that doesn’t solve the problem, but it does enable more automation. Ideally, you still want metadata about what’s inside the virtual machine, describing its service topology, security requirements, even service-level requirements (a rough sketch of what such metadata might look like follows this list).

(3) The University: Expect more from the applications. They need to manage themselves and describe their requirements. They don’t "trust" infrastructure at all – if there are failures, the application is designed to be resilient and extremely self-reliant. On the other hand, IT operations still has a role: even "smart" applications can’t necessarily be trusted. The role of IT operations is to set constraints, manage the amount of resources that can be used, monitor behavior, and look for changes in behavior.
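For the curious, here is a rough sketch of the kind of metadata path (2) has in mind: a descriptor that rides along with the virtual machine so operations tooling knows what it is minding. The field names and values are invented for illustration and do not correspond to any particular product’s schema (packaging formats such as OVF carry a subset of this kind of information today).

```python
# Hypothetical workload descriptor - the kind of metadata that could ride along with a VM.
# Field names and values are invented for illustration, not any particular product's schema.
workload_descriptor = {
    "name": "order-entry-web",                      # hypothetical workload
    "service_topology": {
        "tier": "web",
        "depends_on": ["order-entry-app", "order-entry-db"],
    },
    "security": {
        "zone": "dmz",
        "data_classification": "internal-only",
    },
    "service_levels": {
        "availability_target": "99.9%",
        "max_response_ms": 200,
    },
    "placement": {
        "vcpus": 2,
        "memory_gb": 4,
        "allow_live_migration": True,
    },
}

# Operations automation could then make placement or protection decisions from the metadata,
# e.g. only co-locate "dmz"-zone workloads with other "dmz"-zone workloads.
print(workload_descriptor["security"]["zone"])
```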

The issue for IT operations is that these three paths are each viable, but each has very different skill, architecture, process, and management tool requirements. This confusion will play out inside enterprise IT, which will be managing a mixed bag of “dumb” applications, “smart” applications, virtual machines, private clouds, and public clouds. Get ready for a bumpy ride!

Category: Cloud, Future of Infrastructure

Clarifying Private Cloud Computing

by Tom Bittman  |  May 18, 2010  |  33 Comments

I continue to talk with clients who understand the concept of private cloud computing – they think they know it when they see it – but they can’t quite explain it in words. A year ago I described The Spectrum of Private to Public Cloud Services, but I didn’t put that in the form of a definition. Here’s a shot.

Gartner’s official definition of cloud computing is “A style of computing where scalable and elastic IT-enabled capabilities are delivered as a service to customers using Internet technologies.” We also describe five defining attributes of cloud computing: service-based, scalable and elastic, shared, metered by use, uses Internet technologies. A key to cloud computing is an opaque boundary between the customer and the provider. Graphically, that looks like this:

[Diagram: an opaque service boundary separating the customer from the provider’s implementation]

When the customer does not see the implementation behind the boundary, and the provider doesn’t care who the customer is, you have a public cloud service. So what is private cloud?

Private cloud is “A form of cloud computing where service access is limited or the customer has some control/ownership of the service implementation.”

Graphically, that means that either the provider tunnels through that opaque boundary and limits service access (e.g., to a specific set of people, an enterprise, or a group of enterprises), or the customer tunnels through that opaque boundary through ownership or control of the implementation (e.g., specifying implementation details, limiting hardware/software sharing). Note that control/ownership is not the same as setting service levels – control and ownership apply to the implementation itself, which is not even visible through the service.

[Diagram: private cloud variations, with limited access and/or customer control tunneling through the opaque boundary]

The ultimate example would be enterprise IT building a private cloud service used only by its own enterprise. But there are many other examples, such as a virtual private cloud (the same as the example above, except with ‘enterprise IT’ replaced by ‘third-party provider’) and community clouds (the same as a virtual private cloud, except opened up to a specific and limited set of enterprises).
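One way to see how the definition sorts things out is as two independent questions: is access limited, and does the customer control or own the implementation? Here is a quick sketch of that decision (a simplification for illustration, not an official taxonomy):

```python
# A rough sketch of the decision implied by the definition above.
# This is a simplification for illustration, not an official taxonomy.

def classify(access_limited: bool, customer_controls_implementation: bool) -> str:
    """If either party 'tunnels through' the opaque boundary, the service is a private cloud."""
    if access_limited or customer_controls_implementation:
        return "private cloud (in some form)"
    return "public cloud"

# Examples drawn from the post:
print(classify(False, False))  # open access, opaque implementation -> public cloud
print(classify(True, True))    # enterprise IT serving only its own enterprise -> private cloud
print(classify(True, False))   # virtual private or community cloud run by a third party -> private cloud
```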

Still “foggy”, or is it “clear”?

Category: Cloud

Polling Data on Public/Private Cloud Computing

by Tom Bittman  |  April 21, 2010  |  15 Comments

I’ve been looking for an excuse to use this cartoon – I finally found it!

I’m finishing a research note on some polls I took recently of data center executives, managers and decision-makers. Interesting results. Here’s a summary:

(1) The first poll was focused on the top three concerns that data center professionals have with public cloud computing. The weighted score for “Security and Privacy” was more than the score for the next three concerns combined. Sometimes, when it looks like a meteor, it is a meteor (see, I got the cartoon in here)!

(2) The next two polls focused on public cloud computing plans versus private cloud computing plans. Three-fourths said that they were or would be pursuing a private cloud computing strategy by 2012 (only 4% said they weren’t). Three-fourths said that they would invest more in private cloud computing than in public cloud computing through 2012. Hype plays a part here, but we continue to believe that IT organizations will spend more money on private than on public cloud computing through at least 2012.

(3) The final poll focused on challenges with private cloud computing. “Technology” ranked sixth out of the seven challenges offered. “Management and Operational Processes” came in first, closely followed by “Funding/Chargeback Model.” Process, people and relationship changes will be bigger challenges for private cloud computing than technology.

Once again, thanks to Doug Savage for allowing me to use one of his cartoons (check out the others on his site).

Category: Cloud

The Private Cloud Sandbox

by Tom Bittman  |  April 16, 2010  |  9 Comments

Private cloud computing is rapidly moving up the Gartner hype cycle. In terms of raw market hype, I think we’ll peak late this year. VMware’s “Redwood” won’t be the only announcement – every major infrastructure vendor on the planet will likely put “private cloud” in their announcements, their marketing, their product names.

So before we get too overwhelmed with private cloud computing mania, what’s going to be real, and what isn’t? How will private cloud computing be used?

Just like early virtualization deployments, development and test is the favorite starting point for private cloud computing. Take out the middle-man, and provide a self-service portal for developers to acquire resources. Manage the life cycle of those resources, and return them to the pool when the developer is done. Dev/test is a perfect starting point, because there is a need for rapid provisioning and de-provisioning.

What’s next?

I think the next logical place will be the computing sandbox. This is a place for production workloads that need to be put up quickly – a stand-alone web server, a short-running computational task, a pilot project. “I need it NOW.”

The sandbox will especially be the place to put a workload prior to full production deployment internally, but when it needs to go up fast – and when external deployment (in the “public cloud”) isn’t appropriate for one reason or another.

Sandboxes can have different operational rules than normal production workloads. For example, perhaps it is a short-term “lease” and expires after thirty days. Perhaps the software is never maintained or patched during that window. Perhaps there is no backup or disaster recovery in place for those workloads. Perhaps security coverage is limited.

While a workload is running in a sandbox, the administrivia required to get appropriate approvals and fulfill organizational process requirements can be finished in parallel.

Ideally, after some period of time (like at the end of a thirty-day lease), there might be a way to move the workload from the sandbox to full production, with all of the service-level requirements in place.
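To make the lease idea concrete, here is a small sketch of what a sandbox policy and its expiry check might look like. The thirty-day term and the relaxed rules mirror the examples above; everything else (field names, the promotion step) is hypothetical and not a description of any real cloud management product.

```python
# Illustrative sandbox lease policy and expiry check - hypothetical, not a real product API.
from datetime import date, timedelta

SANDBOX_POLICY = {
    "lease_days": 30,            # short-term lease, as in the example above
    "patching": "none",          # software is not maintained during the lease
    "backup_and_dr": "none",     # no backup or disaster recovery for sandbox workloads
    "security_scope": "limited",
}

def lease_status(deployed_on, today=None):
    """Report whether a sandbox workload is still within its lease or due for a decision."""
    today = today or date.today()
    expires = deployed_on + timedelta(days=SANDBOX_POLICY["lease_days"])
    if today < expires:
        return f"active - lease expires {expires}"
    # At expiry, either promote the workload to full production (with real service levels,
    # patching, backup and security coverage) or reclaim the resources back into the pool.
    return "lease expired - promote to production or reclaim resources"

print(lease_status(date(2010, 4, 1), today=date(2010, 4, 20)))   # still within its lease
print(lease_status(date(2010, 4, 1), today=date(2010, 5, 15)))   # decision time
```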

Many large organizations will start with dev/test first, and build a sandbox next. I believe for many organizations the sandbox itself will mature and become a broader and more capable private cloud service. But there’s no rush.

Category: Cloud, Virtualization