Cameron Haight

A member of the Gartner Blog Network

Cameron Haight
Research VP
10 years at Gartner
30 years IT industry

Cameron Haight is a research vice president in Gartner Research. His primary research focus is on the management of server virtualization and emerging cloud computing environments. Included in this effort is…

Dear APM vendor

by Cameron Haight  |  February 10, 2015  |  5 Comments

Dear APM vendor:

As I step back into covering the application performance monitoring market, I’d like to try to optimize the time during our discussions (particularly briefings). So with that in mind, I offer the following suggestions for your consideration:

  • Please don’t just iterate through (multiple pages of) the litany of problems that exist within the industry (and that you intend to solve). After many years within IT, I’d be a pretty poor analyst if I didn’t already know at least most of them (and besides, I probably created a few of them back in my pre-Gartner days). If there’s something new that you think is compelling that no one else seems to have observed (or perhaps if I’m missing the clue train), okay, but otherwise let’s skip to the meat of the discussion.
  • Please don’t just show me pretty charts and pictures. Everyone does this and quite candidly, with some rare exceptions, they all tend to fade into mental obscurity – even the “cool” ones. As I mentioned in my earlier blog posting, I’m interested in how APM can improve situation awareness. Thus, when you present some data-oriented aspect of your user interface (and I do like to see them), tell me who you think this information is valuable to and, as a consequence of displaying it, what actions you expect them to be able to perform (or not have to perform). This helps me to better understand not only the targeted roles and use cases, but also how work “flows” within your product.
  • Please don’t just provide a laundry list of features. I’m much more interested in how the individual capabilities of your technology interact to provide a superior experience than if I were to just go out and procure them independently from different technology providers. This is important because there is a constant pendulum within the industry that swings between integrating best-of-breed and purchasing all-in-one; for now, the trend seems to favor the former, in part because of developments like DevOps.
  • Please don’t just tell me about your architecture. Tell me how you’ve perhaps designed your architecture to adapt to things that we might not even be thinking about today (the famous “unknown unknowns”). In other words, as a consumer, am I going to reach a Y2K moment with your technology? A great example of the principles sought in architectures such as Amazon’s and others’ can be found in an old video by Dr. Werner Vogels of Amazon.
  • Please don’t just tell me about the wins. Everybody highlights these and rarely (although it happens) are we given a poor reference. Instead, tell me also why you may have lost at an account. Whoa! Why in the heck would you want to do this? First, we’ve probably heard about the loss because it’s someone else’s win so this gives you an opportunity to provide another view on the situation. Second, and perhaps most important, it helps us to assess how you are learning and adapting from these experiences. As we read in The Lean Startup, it’s (a) key to viability. By the way, IMHO, you don’t have a “real” reference unless the slide also provides the name and contact information of the user.
  • Please don’t just provide an obligatory page on partners. I’ve seen enough of these slides to know that many, if not most, of the partners on the slide are in name only. Instead, tell me which partners have tied some degree of their success to yours in terms of money or some other resource that has been committed.
  • Please don’t just provide me revenues or numbers of customers. Instead, help me to understand your sales motion and how you are creating long-lasting relationships. This is especially important with SaaS-based models where the switching costs are often assumed to be low.
  • Please don’t just tell me who you are – tell me “why” you are (another oldie-but-goodie video, this time from Simon Sinek). I tweeted the other day that most companies don’t know their “why.” Why is this important (sorry for the pun)? Because this industry is notoriously short-term-oriented. Many companies will have been on a rocketship ride and either IPO’d or found a tempting embrace from a larger vendor. And then … well, it’s often hard to hold together the institutional values that initially made you successful when you’re facing quarterly earnings pressure or operating within the confines of some software leviathan. It happens over and over. Help me to understand how you expect your culture to outlive the founders getting wealthy, and hence how you’ll future-proof the investments of your customers.
  • Finally, please don’t make it the proverbial death by PowerPoint. Something like the Guy Kawasaki 10/20/30 rule (10 slides, 20 minutes, 30-point font) seems appropriate. If you have more to present, then think about breaking it up into several sessions if possible (I know, it’s not always easy to schedule more discussions due to demand, but again, it’s ultimately about having an effective meeting). When constructing the presentation, also think about the three or so main points that, in the flurry of other vendor interactions, I am most likely to keep in my “cache” memory. While I do make it a point to take voluminous notes in any discussion, I may not always have the opportunity to refer to them in subsequent end-user client interactions.

By the way, I am not trying to appear to be an arrogant analyst by providing these suggestions. I want to make sure that we have a fruitful discussion where we both get value. Many thanks, and I look forward to our future APM interactions!

5 Comments »

Category: Uncategorized     Tags:

Back to the Future

by Cameron Haight  |  February 3, 2015  |  2 Comments

As some of you know by now, my colleague Jonah Kowall (@jkowall) has announced that he is leaving Gartner for an opportunity in the vendor community. When this was announced internally, I was approached by my team manager to see if I might be interested in taking Jonah’s place in covering APM; since I used to cover an early version of this area years ago, I was a logical candidate.

My first thought was that Jonah would be a hard act to follow as he’s done an exemplary job of covering the topic and thus there were big shoes to fill. My second thought was that I was really enjoying working with companies in transforming to a DevOps and Web-scale IT culture and wasn’t sure if covering APM would allow me to have the same impact. After a few days of thinking it over, I decided to “volunteer” to co-cover APM with my colleague Will Cappelli (who supports APM in EMEA and is also focused on IT Operations Analytics or ITOA).

So, it’s reasonable to ask what made me decide to say yes (to an offer that I could in fact refuse – Gartner is great about that). To be honest, as I surveyed the market, I didn’t feel like I was that out of touch and I thought that I could step in fairly quickly and begin to assist our clients (note: I hope this is not a case of too much hubris). Sure, when I used to cover the precursor to APM (I developed the first Magic Quadrant for J2EE Application Server Management in 2003), the lapels on our suits were broader, but when I was reviewing the industry to help influence my decision, I didn’t feel like that scene in the movie Somewhere in Time where Christopher Reeve shows up in clothes that were decidedly out of date for the era.

In other words, the technologies, while more advanced, did not seem (at least to me) to be a quantum leap beyond what was available a decade or more ago. Prettier faces or UIs for sure … and some improved install mechanisms, but with the possible exception of the use of more insightful analytics, nothing screamed radically new (and from my perspective, SaaS is largely about improving the delivery mechanism, not the functionality – at least in many of today’s offerings). And that’s a problem, I think. With that as a basis, I’d like to share with you some additional thoughts that I have on APM:

  • Situation awareness – I think a criticism of the analyst community (one that I have been guilty of myself) is that we are often held captive by the shiny object syndrome. We’re geeks – it’s probably why most of us are in this industry – but we need to look beyond the technology “porn.” More specifically, while the technologies are in some cases getting better, you can, I think, make a claim that our overall situation awareness may not be. And this goes beyond applying analytics to deal with the digital tsunami; it also includes better understanding how we process information (cognition), as well as looking at ways to improve usability through improvements in visualization, workplace design, etc. In my research, I hope to go outside the IT domain to see how operators in other mission-critical areas are improving their sensemaking to help them in their day-to-day activities.
  • Work roles and processes – One of the reasons that I gravitated to covering DevOps in 2010 here at Gartner was that I didn’t see our clients getting any better using frameworks like ITIL. Process complexity seemed to be increasing (and not decreasing as I thought it should, because we were certainly getting better with all of the money we were throwing at it – weren’t we?), and this approach was becoming a cash cow for consultants. So, with this in mind, I want to ask how APM is improving the monitoring “process.” As a consequence of our investments, what are we no longer having to do? Related to this is the question of how APM is improving collaboration among systems administration, operations and development. There is, I think, a lot of work that can be done to better understand the ethnography of the individuals charged with the monitoring and management of our IT environments.
  • Parochialism – I’m not sure that this is the correct word, but what I’m suggesting is that we in the IT operations world seem guilty of the “if the only tool you have is a hammer, everything looks like a nail” perspective. In other words, we often seem to say that the only way to fix an application problem is to buy more APM tools. We don’t seem to reflect much on how changes in the application architecture can result in a (potentially) reduced need for lots of expensive add-on management technology – at least in the long run. In the DevOps world, feedback loops are critical, and we need to do a better job of suggesting how to improve the application from the metrics that we are collecting. In essence, we need to become better systems thinkers, as philosophies such as DevOps recommend.
  • Billions and billions – Astronomer Carl Sagan liked to use big numbers when talking about the cosmos to put its size in context. Closer to home, there are seemingly near-equally big numbers when it comes to things like microservices, mobile applications, sensors (IoT) and so on. How can we possibly manage these using management architectures that are not that far removed from their traditional client/server ancestors? Perhaps a better question is whether we need to manage them at all – or will more and more intelligence have to be built in for us to have any hope of managing exponentially larger numbers of things (including applications) than we have to date? (A minimal sketch of that built-in intelligence follows this list.)
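As promised above, here is one way to picture that built-in intelligence – a minimal sketch in Python, where every name, port and threshold is an illustrative assumption rather than any vendor’s design: instead of a central manager polling thousands of endpoints metric by metric, each service assesses itself and exposes a single summarized verdict for a higher-level system to aggregate.

    # Minimal self-assessing health endpoint (illustrative sketch).
    # The service, not an external poller, decides whether it is healthy.
    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    START = time.time()
    ERROR_TIMES = []  # the application appends a timestamp on each error

    class Health(BaseHTTPRequestHandler):
        def do_GET(self):
            now = time.time()
            recent = [t for t in ERROR_TIMES if now - t < 60]
            verdict = {
                "status": "degraded" if len(recent) > 10 else "ok",  # assumed threshold
                "uptime_s": int(now - START),
                "errors_last_60s": len(recent),
            }
            body = json.dumps(verdict).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Health).serve_forever()  # assumed port

The management layer then scales with the number of verdicts rather than the number of raw metrics – which is the only arithmetic that seems workable when the managed population grows exponentially.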

These are just a few of the things that I’d like at least some of my future APM research to focus on. Of course, there will be lots of continuing detail on the APM technology providers themselves in upcoming Gartner deliverables. In the meantime, let me know if you think I’m missing something and best wishes to Jonah on his new journey!

2 Comments »

Category: Applications     Tags:

Re-visiting The Phoenix Project

by Cameron Haight  |  January 14, 2015  |  Comments Off

Last year, I created a list of books that I really wanted to read. Today, that list numbers over 90 books that cover a wide range of IT-related topics. I had already crossed off The Phoenix Project, as I read it shortly after its publication, but the novel is one of those rare books that I think you need to read more than once because of all of the “awesomeness” that it contains. So, over the recent holidays I took time in between all of the festivities to read it for a second time.

When I first read the book, my initial hero was Bill Palmer, who as the acting VP of IT Operations, righted a sinking ship and demonstrated how critical a well oiled operations team was to the business. This, of course, shows my bias since I have been operationally-focused in some fashion throughout most of my career and as a community, we all seem to cheer when operations is viewed in a positive light.

Upon the second reading, however, my perception changed. Bill is still the key focus of the story and is still deserving of the majority of the kudos, but the person that I subsequently developed the most respect for was John Pesche – the Parts Unlimited CISO. Of all of the characters in the book, he had to change the most IMHO. He had to internally challenge years of security culture training and conventional wisdom and come to the conclusion that there had to be a better way – a way that was focused not on just protecting the business, but more importantly, enabling it. That’s a tough thing to do in any type of job but perhaps even more so today within the security and audit realms where the threats and the regulations seem to multiply every day.

I’m starting to see more of this behavior in the clients that I interact with. One particular financial industry organization that I’m familiar with constantly works with and encourages their audit (and security) teams to be business-outcome-focused, not process-focused. There is not only constant re-assessment of the types of necessary controls (and processes), but also of the extent of their applied scope.

For Gartner clients, my colleague (and one of The Phoenix Project’s co-authors) George Spafford and I wrote on DevOps and compliance last year here. In addition, Gene Kim, James DeLuccia (of Ernst and Young) and some others have put together a DevOps Audit Defence Toolkit that is another valuable addition to the discussion. Now, on to the next book on my list!

Comments Off

Category: Uncategorized     Tags:

Getting Your Arms Around DevOps – DevOps Patterns and Practices

by Cameron Haight  |  October 13, 2014  |  7 Comments

I receive lots of inquiries from our clients around the subject of “what is DevOps?” Their interest goes beyond needing a simple definition to wanting to know what constitutes it. After much searching, I couldn’t find exactly what I was looking for, so a while ago I developed the graphic below, which I now use in some of my presentations. I’ve shared it with several of my Gartner colleagues as well as a few notable DevOps thought leaders. For example, Andrew Clay Shafer (twitter.com/littleidea) thinks a Euler diagram (en.wikipedia.org/wiki/Euler_diagram) might be a better representation; however, I’m not sure that it shows dependencies as well (although it certainly would make for a cleaner depiction). In any case, feel free to let me know what you think – I’m certainly open to suggestions. Thanks …

Update: I’m re-posting with a larger graphic that will hopefully help.

[Image: DevOps patterns and practices graphic]

7 Comments »

Category: Uncategorized     Tags:

Open BigCloud Symposium and OCP Workshop 2014

by Cameron Haight  |  May 4, 2014  |  2 Comments

The University of Texas at San Antonio will be hosting the first annual Open BigCloud Symposium and OCP Workshop later this week. I’ll be there talking about the impact of DevOps. Good to see academia partnering with industry to advance cloud, Big Data and DevOps concepts.

2 Comments »

Category: Uncategorized     Tags:

UTSA Open Compute Lab

by Cameron Haight  |  February 28, 2014  |  2 Comments

Last week, as part of my Web-scale IT research, I visited the Cloud and Big Data Laboratory at the University of Texas at San Antonio (UTSA). I met with Paul Rad (Director, The University of Texas at San Antonio Cloud and Big Data Laboratory and Vice President of Open Research at Rackspace), Professor Rajendra Boppana (Interim Chair of Computer Science Department), Carlos Cardenas (Manager at Cloud and Big Data Laboratory, CS Dept at UTSA) and Joel Wineland (Chief Technologist, IT infrastructure and Open Compute Solutions, RGS).


The organization is devoted to the research of new technologies and innovations in various areas of computing such as OpenStack and Software Defined Networks (SDN). Recently, however, it also became one of the two labs in the world where certification of Open Compute technology is performed (the other is in Taiwan). What is different from, say, testing groups within vendors is that the goal is to go deep, or “full stack.” Thus the focus is on working closely with large enterprises in specific industries, i.e., financial services, healthcare/bioinformatics and energy, to identify and certify key workloads (such as Monte Carlo simulation for trading organizations – see the sketch below) that would make good candidates for scale-out OCP environments.
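For readers who haven’t run one, a Monte Carlo simulation is embarrassingly parallel – millions of independent random trials whose results are averaged – which is exactly why it suits scale-out gear. A toy Python sketch of the workload class (the parameters are illustrative; the lab’s actual certification workloads are far larger and industry-specified):

    # Toy Monte Carlo pricer for a European call option under geometric
    # Brownian motion: average the discounted payoffs over many
    # independently simulated terminal prices.
    import math
    import random

    def mc_call_price(s0, k, r, sigma, t, n_paths):
        payoff_sum = 0.0
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
            payoff_sum += max(st - k, 0.0)
        return math.exp(-r * t) * payoff_sum / n_paths

    print(mc_call_price(s0=100, k=105, r=0.02, sigma=0.2, t=1.0, n_paths=200_000))

Because every path is independent, the trial count can be sharded across as many commodity nodes as are available, with only the partial sums needing aggregation – the shape of workload OCP-style environments are built for.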


While San Antonio is also the home of Rackspace, which is one of the key companies supporting the Open Compute Project, another key reason that a university was selected as the site for this process was to develop a curriculum involving technology hardware and software development in support of the engineering and computer science schools. UTSA not only wants to be a place for its students to become familiar with critical open technologies, but also to act as an incubator for the development of new technologies and maybe even new companies.


In the future, there is interest in hosting a symposium to bring together technology vendors, commercial enterprises and other universities in order to expand the adoption of an Open Compute direction. More on this later.

2 Comments »

Category: Cloud     Tags:

All in with Web-scale IT

by Cameron Haight  |  August 21, 2013  |  Comments Off

A few months ago, I blogged on a concept that at Gartner we had begun calling Web-scale IT. Yesterday I published a new note on Web-scale IT called “The Long-Term Impact of Web-Scale IT Will Be Dramatic.” In this document, I’ve burned all of the boats. No more analyst nuance that says “it depends.” Why? Because I’ve seen the impact of conventional wisdom on IT (and especially on IT operations which is my direct area of coverage) and frankly it isn’t delivering. Maturity levels are “stuck” and costs, even with the advent of cloud, remain too high. If enterprise IT wants to remain relevant to the business, then they’ll have to rethink the entire IT “value chain.”

So what’s the model to follow? Look to the large cloud services providers such as Google, Facebook, Amazon, etc. In other words, it’s not the cloud per se that will save enterprise IT, but thinking and acting like the major cloud providers across all the key dimensions of IT that will. And I’m saying this even with the knowledge of outages by several of these firms over the past week. Why? Because given the innovative nature of these organizations, the “system” will learn and get even better. This is the theme behind Nassim Taleb’s book Antifragile: Things That Gain from Disorder. Antifragile systems aren’t perfect but they are designed to get better when stressed. And the systems (and by systems I mean not just the technology, but the people, processes and even culture) at these firms get stressed every day, so even when they fail, it winds up being a “win.”
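In code terms, that daily stress often takes the form of deliberate fault injection, so that recovery paths are exercised continuously rather than discovered during a real outage. A minimal Python sketch in that antifragile spirit (the decorator, names and failure rate are illustrative assumptions, not a reference to any particular chaos tool):

    # Inject failures into a small fraction of calls (illustrative sketch)
    # so the fallback path runs every day, not just during real incidents.
    import random

    FAILURE_RATE = 0.05  # assumed: fail 5% of calls in pre-production

    def chaos(fn):
        def wrapper(*args, **kwargs):
            if random.random() < FAILURE_RATE:
                raise RuntimeError("injected fault: dependency unavailable")
            return fn(*args, **kwargs)
        return wrapper

    @chaos
    def fetch_profile(user_id):
        return {"id": user_id}  # stand-in for a real downstream call

    def fetch_profile_with_fallback(user_id):
        try:
            return fetch_profile(user_id)
        except RuntimeError:
            return {"id": user_id, "degraded": True}  # cached/default view

    print(fetch_profile_with_fallback(42))

Because a few percent of calls fail every day, the degraded path is continuously tested, and the “system” – people and process included – keeps learning from the stress.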

I usually receive several objections when I talk to enterprises about copying the Web-scale IT methods found in these leading organizations, but let me focus on two. The first is related to size, in the context of “we’ll never be as big as Google so why is this concept relevant to us?” There are many responses to this but the one that I prefer to use is that while the term “Web-scale IT” implies scale in terms of size, there is also a scale in terms of speed. I tell my clients that there is nothing stopping them from being as agile as these organizations. Indeed, the much smaller sizes of many enterprises might actually enable them to achieve higher IT velocity than their larger cloud counterparts.

The second objection is often based on a comparison of businesses and business models. I hate to say it, but there’s often an underpinning of some smugness, because the subtle message is that the enterprise is a “real business” (like a financial services firm or a manufacturer) and not one that provides searches, enables social networking or delivers videos. There are some estimates that the recent five-minute outage at Google cost the company around $545,000. I don’t know about the real business implication, but that’s real money, and I’m pretty sure that teams in Google are at work to try to see that it doesn’t happen again.
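A quick back-of-envelope calculation shows where estimates like that come from – a sketch only, since the revenue figure below is my assumption of Google’s rough annual run rate at the time:

    # Rough outage-cost arithmetic (the revenue figure is an assumption).
    annual_revenue = 57e9                         # assumed ~$57B/year
    per_minute = annual_revenue / (365 * 24 * 60)
    print(f"~${per_minute:,.0f}/minute; ~${5 * per_minute:,.0f} per 5-minute outage")
    # ~$108,447/minute and ~$542,237 per five minutes – near the $545,000 estimate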

Now, let’s turn to the vendor community. While the impact on the enterprise of Web-scale IT will be large, the impact on the vendor community may well be even larger. Suppliers of traditional products and services will not go away, but over time they will largely lose their high degree of technology influence. Why? Because the enterprise IT leaders that “get it” and start to build out their own Web-scale IT architectures will see that they no longer have to live with the historical trade-off of cost versus function. Open source hardware and software is no longer a compromise solution. In my area of IT operations technology, many open source management products are the equal of, if not better than, their closed-source alternatives (they’re certainly more innovative). And we’re starting to see the same thing in the hardware world through efforts like the Open Compute initiative from Facebook.

At Gartner, we’ll also have to make sure that we don’t rest on our laurels. While we have a business model that has withstood the test of time and technology, we will also have to continue to innovate in order to assist our enterprise clients as they begin their journey towards this new environment. Indeed, I have received substantial internal backing within the company to pursue the Web-scale IT concept and to see where it heads. One benefit already is that I’m working with analysts in other areas such as application architecture and server hardware design so that we can provide integrative guidance across the entire IT value chain.

I’ve burned my boats. Is anyone else prepared to leave the conventional IT world behind?

Comments Off

Category: Cloud Cloud management IT management     Tags:

Why are vendors in the IT operations management market?

by Cameron Haight  |  July 15, 2013  |  2 Comments

The other day, my colleague George Spafford emailed me a link to a 2009 TED Talk by Simon Sinek on “How great leaders inspire action.” Even though the video is now several years old, the argument that Simon makes is timeless. The key graphic in his presentation is re-created below:

[Image: Sinek’s “Golden Circle” – concentric rings of why, how and what]

Sinek states that most organizations know what they do and also how they do it (i.e., perhaps through some value-differentiating process, etc.). He argues, however, that few organizations know why they do what they do (beyond, of course, just generating a profit). From a strategy perspective, these conventional organizations operate from the outside in with respect to his model, which Sinek calls “The Golden Circle.”

What Sinek argues is that great, inspiring organizations and people (he cites Apple Computer, the Wright Brothers and Dr. Martin Luther King) think, act and operate the same way which is the opposite from everyone else. They start from the inside of the model or “why” because they have a purpose, a belief or a cause that is the DNA embedded in everything that they do, from the products that they build, to the marketing messages that they develop and ultimately to the people that they hire.

As I watched the video, I started to ask myself whether I could think of any IT operations management companies that could be included in the list of inspiring entities that Sinek cites. The results that came back to me were pretty sparse. Many were companies that were early movers in the DevOps arena, with their almost evangelical zeal for their mission to change the conventional IT operations environment. And one or two others pre-dated the DevOps rise with their own edgy approaches. I wonder, however, how many of these firms will manage to retain the quality that identifies them over a decade or more, like an Apple?

I have a personal example of this. Prior to coming to Gartner, I worked at BMC Software in a variety of roles. Coming from IBM, it was quite a shock to me. With FY 1993 sales of almost $240 million and based, of all places, in Houston, the company was 13 years old (and had been public for 5 of those) and yet still operated very much like a Silicon Valley start-up. Developers walked around in jeans, shorts and flip-flops, wearing T-shirts that said “Best Little Software House in Texas” on them. The company had “product authors” who designed software that did amazing things on mainframes – things that even the inventor of the MVS operating system had difficulty replicating during that era. Sales was unconventional for that time as well – it was rare for an account rep to actually visit their customer, as most interaction was done over the phone leveraging a “try and buy” model, which was even more amazing when you consider that many software licenses regularly cost over $100,000.

But today we see a company that has been a perennial member of the “Big Four” in IT operations management now in the process of going private. There are many possible reasons for this, but I like to think it has a lot to do with the issue surrounding Simon Sinek’s question of “why.” And to be clear, this issue is not specific to BMC, but can also be found in almost all of their competitors, i.e., CA Technologies, HP, IBM, etc., as well as numerous other large and small firms in the management industry. Why do they exist and, by extension, what is their dream? How they answer this will help us to better understand their future prospects.

2 Comments »

Category: IT management     Tags: , , , ,

Enter Web-scale IT

by Cameron Haight  |  May 16, 2013  |  1 Comment

 

In a research note that was published yesterday, Gartner introduced the term “web-scale IT.” What is web-scale IT? It’s our effort to describe all of the things happening at large cloud services firms such as Google, Amazon, Rackspace, Netflix, Facebook, etc., that enable them to achieve extreme levels of service delivery as compared to many of their enterprise counterparts.

In the note, we identify six elements of the web-scale recipe: industrial data centers, web-oriented architectures, programmable management, agile processes, a collaborative organization style and a learning culture. The last few items are normally discussed in the context of DevOps, but we saw a need to expand the perspective to include the changes being made with respect to infrastructure and applications that act to complement DevOps capabilities. So, we’re not trying to minimize DevOps in any way, because we view it as essential to “running with the big dogs,” but we’re also saying that there’s more that needs to be done with respect to the underlying technology to optimize end-to-end agility.

In addition, while the term “scale” usually refers to size, we’re not suggesting that only large enterprises can benefit. Another scale “attribute” is speed, and so we’re stating that even smaller firms (or departments within larger IT organizations) can still find benefit in a web-scale IT approach. Agility has no size correlation, so even more modestly-sized organizations can achieve some of the capabilities of an Amazon, etc., provided that they are willing to question conventional wisdom where needed.

Web-scale IT is not one-size-fits-all, as we don’t want to replace one IT dogma with another. In true pace-layered fashion, use the approach to IT service delivery that works best for your customers. Gartner suggests that so-called “systems of innovation” – applications and services needing high rates of change – are the more likely initial candidates, but IT organizations are urged to experiment to see what makes sense for them.

Stay tuned for more on web-scale IT from Gartner in the future! 

1 Comment »

Category: IT management     Tags: , , ,

DevOps and Monitoring

by Cameron Haight  |  November 8, 2011  |  1 Comment

Much of the discussion around tooling in the DevOps environment focuses on automation of the configuration and release management processes. Somewhat lower on the radar screen are tools (and processes) for the monitoring of an evolving IT environment. This excellent presentation by Patrick Debois, Israel Gat and Andrew Shafer talks about how the metrics that are collected through the monitoring process have to change. The tools, however, may be morphing as well. “Big management data” will likely lead to a change in the architecture of the technologies that we use for monitoring. Established companies like Splunk have of course been trailblazers in this regard, but others are also starting to arise. Boundary, which recently came out of stealth mode, is another company to potentially keep your eye on. I’m working now on a research note detailing some of the non-conventional wisdom that has accrued when it comes to DevOps-oriented monitoring. I plan to follow this up with some additional writing on how management architectures will increasingly look like their business application peers in the big data world, so stay tuned.
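As a concrete illustration of the fire-and-forget, analyze-centrally style that these newer monitoring architectures favor, here is a minimal Python sketch of emitting application metrics in the widely used statsd line format over UDP (the collector address and metric names are illustrative assumptions):

    # Fire-and-forget metric emission in the statsd line format:
    # "name:value|c" for counters, "name:value|ms" for timings.
    import socket
    import time

    SOCK = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    DEST = ("127.0.0.1", 8125)  # assumed local statsd-compatible collector

    def count(name, value=1):
        SOCK.sendto(f"{name}:{value}|c".encode(), DEST)

    def timing(name, ms):
        SOCK.sendto(f"{name}:{ms}|ms".encode(), DEST)

    start = time.time()
    # ... application work would go here ...
    timing("checkout.latency", int((time.time() - start) * 1000))
    count("checkout.requests")

The emitter stays cheap and non-blocking; all of the heavy lifting – storage, correlation, analytics – moves to a central “big management data” pipeline.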

1 Comment »

Category: Uncategorized     Tags: