by Cameron Haight | January 14, 2015
Last year, I created a list of books that I really wanted to read. Today, that list numbers over 90 books covering a wide range of IT-related topics. I had already crossed off The Phoenix Project, having read it shortly after its publication, but the novel is one of those rare books that I think you need to read more than once because of all of the “awesomeness” that it contains. So, over the recent holidays I took time in between all of the festivities to read it for a second time.
When I first read the book, my initial hero was Bill Palmer, who, as the acting VP of IT Operations, righted a sinking ship and demonstrated how critical a well-oiled operations team is to the business. This, of course, shows my bias, since I have been operationally focused in some fashion throughout most of my career, and as a community we all seem to cheer when operations is viewed in a positive light.
Upon the second reading, however, my perception changed. Bill is still the key focus of the story and is still deserving of the majority of the kudos, but the person that I subsequently developed the most respect for was John Pesche – the Parts Unlimited CISO. Of all of the characters in the book, he had to change the most IMHO. He had to internally challenge years of security culture training and conventional wisdom and come to the conclusion that there had to be a better way – a way that was focused not on just protecting the business, but more importantly, enabling it. That’s a tough thing to do in any type of job but perhaps even more so today within the security and audit realms where the threats and the regulations seem to multiply every day.
I’m starting to see more of this behavior in the clients that I interact with. One financial industry organization that I’m familiar with constantly works with and encourages its audit (and security) teams to be business outcome-focused rather than process-focused. They not only constantly reassess the types of necessary controls (and processes), but also the extent of their applied scope.
For Gartner clients, my colleague (and one of The Phoenix Project’s co-authors) George Spafford and I wrote on DevOps and compliance last year here. In addition, Gene Kim, James DeLuccia (of Ernst and Young) and some others have put together a DevOps Audit Defense Toolkit that is another valuable addition to the discussion. Now, on to the next book on my list!
by Cameron Haight | October 13, 2014
I receive lots of inquiries from our clients around the subject of “what is DevOps?” Their interest goes beyond needing a simple definition to wanting to know what constitutes it. After much searching, I couldn’t find exactly what I was looking for, so a while ago I developed the graphic below that I now use in some of my presentations. I’ve shared this with several of my Gartner colleagues as well as a few notable DevOps thought leaders. For example, Andrew Clay Shafer (twitter.com/littleidea) thinks an Euler diagram (en.wikipedia.org/wiki/Euler_diagram) might be a better representation; however, I’m not sure that this shows dependencies as well (although it certainly would make for a cleaner depiction). In any case, feel free to let me know what you think – I’m certainly open to suggestions. Thanks …
Update: I’m re-posting with a larger graphic that will hopefully help.
by Cameron Haight | May 4, 2014
The University of Texas at San Antonio will be hosting the first annual Open BigCloud Symposium and OCP Workshop later this week. I’ll be there talking about the impact of DevOps. Good to see academia partnering with industry to advance cloud, Big Data and DevOps concepts.
by Cameron Haight | February 28, 2014
Last week, as part of my Web-scale IT research, I visited the Cloud and Big Data Laboratory at the University of Texas at San Antonio (UTSA). I met with Paul Rad (Director, The University of Texas at San Antonio Cloud and Big Data Laboratory and Vice President of Open Research at Rackspace), Professor Rajendra Boppana (Interim Chair of Computer Science Department), Carlos Cardenas (Manager at Cloud and Big Data Laboratory, CS Dept at UTSA) and Joel Wineland (Chief Technologist, IT infrastructure and Open Compute Solutions, RGS).
The organization is devoted to the research of new technologies and innovations in various areas of computing, such as OpenStack and Software Defined Networks (SDN). Recently, however, it also became one of only two labs in the world where certification of Open Compute technology is performed (the other is in Taiwan). What differs from, say, testing groups within vendors is that the goal is to go deep, or “full stack.” Thus the focus is on working closely with large enterprises in specific industries, i.e., financial services, healthcare/bioinformatics and energy, to identify and certify key workloads (such as Monte Carlo simulation for trading organizations) that would make good candidates for scale-out OCP environments.
While San Antonio is also the home of Rackspace, one of the key companies supporting the Open Compute Project, another key reason that a university was selected as the site for this process was to develop a curriculum, in support of the engineering and computer science schools, involving technology hardware and software development. UTSA not only wants to be a place for its students to become familiar with critical open technologies, but also to act as an incubator for the development of new technologies and maybe even new companies.
In the future, there is interest in hosting a symposium to bring together technology vendors, commercial enterprises and other universities in order to expand the adoption of an Open Compute direction. More on this later.
Category: Cloud
by Cameron Haight | August 21, 2013
A few months ago, I blogged on a concept that at Gartner we had begun calling Web-scale IT. Yesterday I published a new note on Web-scale IT called “The Long-Term Impact of Web-Scale IT Will Be Dramatic.” In this document, I’ve burned all of the boats. No more analyst nuance that says “it depends.” Why? Because I’ve seen the impact of conventional wisdom on IT (and especially on IT operations which is my direct area of coverage) and frankly it isn’t delivering. Maturity levels are “stuck” and costs, even with the advent of cloud, remain too high. If enterprise IT wants to remain relevant to the business, then they’ll have to rethink the entire IT “value chain.”
So what’s the model to follow? Look to the large cloud services providers such as Google, Facebook, Amazon, etc. In other words, it’s not the cloud per se that will save enterprise IT, but thinking and acting like the major cloud providers across all the key dimensions of IT that will. And I’m saying this even with the knowledge of outages by several of these firms over the past week. Why? Because given the innovative nature of these organizations, the “system” will learn and get even better. This is the theme behind Nassim Taleb’s book Antifragile: Things That Gain from Disorder. Antifragile systems aren’t perfect but they are designed to get better when stressed. And the systems (and by systems I mean not just the technology, but the people, processes and even culture) at these firms get stressed every day, so even when they fail, it winds up being a “win.”
I usually receive several objections when I talk to enterprises about copying the Web-scale IT methods found in these leading organizations, but let me focus on two. The first is related to size, in the context of “we’ll never be as big as Google so why is this concept relevant to us?” There are many responses to this but the one that I prefer to use is that while the term “Web-scale IT” implies scale in terms of size, there is also a scale in terms of speed. I tell my clients that there is nothing stopping them from being as agile as these organizations. Indeed, the much smaller sizes of many enterprises might actually enable them to achieve higher IT velocity than their larger cloud counterparts.
The second objection is often based on a comparison of businesses and business models. I hate to say it, but there’s often an underpinning of some smugness, because the subtle message is that the enterprise is a “real business” (like a financial services firm or a manufacturer) and not one that provides searches, enables social networking or delivers videos. There are some estimates that the recent five-minute outage at Google cost the company around $545,000. I don’t know about the real business implication, but that’s real money, and I’m pretty sure that teams at Google are at work to see that it doesn’t happen again.
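As a back-of-envelope check on that figure (the $545,000 number is a third-party estimate, not an official one), the implied revenue run rate works out roughly like this:

```python
# Back-of-envelope: what a five-minute outage cost implies about revenue run rate.
# The $545,000 figure is the third-party estimate cited above, not an official number.
outage_cost = 545_000        # estimated cost of the outage, USD
outage_minutes = 5

per_minute = outage_cost / outage_minutes       # $109,000 per minute
minutes_per_year = 60 * 24 * 365                # 525,600
implied_annual = per_minute * minutes_per_year  # ~$57.3 billion

print(f"${per_minute:,.0f}/minute -> ${implied_annual / 1e9:.1f}B implied annual run rate")
# -> $109,000/minute -> $57.3B implied annual run rate
```

That implied annual figure is in the same ballpark as Google's reported revenue at the time, which suggests the estimate is at least plausible – and "real money" by any definition.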
Now, let’s turn to the vendor community. While the impact on the enterprise of Web-scale IT will be large, the impact on the vendor community may well be even larger. Suppliers of traditional products and services will not go away, but over time they will largely lose their high degree of technology influence. Why? Because the enterprise IT leaders that “get it” and start to build out their own Web-scale IT architectures will see that they no longer have to live with the historical trade-off of cost versus function. Open source hardware and software is no longer a compromise solution. In my area of IT operations technology, many open source management products are the equal of, if not better than, their closed source alternatives (they’re certainly more innovative). And we’re starting to see the same thing in the hardware world through efforts like the Open Compute initiative from Facebook.
At Gartner, we’ll also have to make sure that we don’t rest on our laurels. While we have a business model that has withstood the test of time and technology, we will also have to continue to innovate in order to assist our enterprise clients as they begin their journey towards this new environment. Indeed, I have received substantial internal backing within the company to pursue the Web-scale IT concept and to see where it heads. One benefit already is that I’m working with analysts in other areas such as application architecture and server hardware design so that we can provide integrative guidance across the entire IT value chain.
I’ve burned my boats. Is anyone else prepared to leave the conventional IT world behind?
Category: Cloud, Cloud management, IT management
by Cameron Haight | July 15, 2013
The other day, my colleague George Spafford emailed me a link to a 2009 TED Talk by Simon Sinek on “How great leaders inspire action.” Even though the video is now several years old, the argument that Simon makes is timeless. The key graphic in his presentation is re-created below:
Sinek states that most organizations know what they do and also how they do it (i.e., perhaps through some value-differentiating process, etc.). He argues, however, that few organizations know why they do what they do (beyond, of course, just generating a profit). From a strategy perspective, these conventional organizations operate from the outside in with respect to his model, which Sinek calls “The Golden Circle.”
What Sinek argues is that great, inspiring organizations and people (he cites Apple Computer, the Wright Brothers and Dr. Martin Luther King) think, act and operate the same way which is the opposite from everyone else. They start from the inside of the model or “why” because they have a purpose, a belief or a cause that is the DNA embedded in everything that they do, from the products that they build, to the marketing messages that they develop and ultimately to the people that they hire.
As I watched the video, I started to ask myself if I could think of any IT operations management companies that could be included in the list of inspiring entities that Sinek cites. The results that came back to me were pretty sparse. Many were companies that were early movers in the DevOps arena, with their almost evangelical zeal for their mission to change the conventional IT operations environment. And one or two others pre-dated the DevOps rise with their own edgy approaches. I wonder, however, how many of these firms will manage to retain the quality that identifies them over a decade or more, like an Apple?
I have a personal example of this. Prior to coming to Gartner, I worked at BMC Software in a variety of roles. Coming from IBM, it was quite a shock to me. With FY 1993 sales of almost $240 million, based, of all places, in Houston, the company was 13 years old (and had been public for 5 of those) and yet still operated very much like a Silicon Valley start-up. Developers walked around in jeans, shorts and flip-flops, wearing T-shirts that said “Best Little Software House in Texas” on them. The company had “product authors” that designed software that did amazing things on mainframes – things that even the inventor of the MVS operating system had difficulty replicating during that era. Sales was unconventional for that time as well – it was rare for an account rep to actually visit their customer, as most interaction was done over the phone leveraging a “try and buy” model, which was even more amazing when you consider that many software licenses regularly cost over $100,000.
But today we see a company that has been a perennial member of the “Big Four” in IT operations management now in the process of going private. There are many possible reasons for this, but I like to think it has a lot to do with the issue surrounding Simon Sinek’s question of “why.” And to be clear, this issue is not specific to BMC, but can also be found in almost all of their competitors, i.e., CA Technologies, HP, IBM, etc., as well as numerous other large and small firms in the management industry. Why do they exist and, by extension, what is their dream? How they answer this will help us to better understand their future prospects.
Category: IT management Tags: agile, Apple, Big Four, BMC, DevOps
by Cameron Haight | May 16, 2013
In a research note that was published yesterday, Gartner introduced the term “web-scale IT.” What is web-scale IT? It’s our effort to describe all of the things happening at large cloud services firms such as Google, Amazon, Rackspace, Netflix, Facebook, etc., that enables them to achieve extreme levels of service delivery as compared to many of their enterprise counterparts.
In the note, we identify six elements to the web-scale recipe: industrial data centers, web-oriented architectures, programmable management, agile processes, a collaborative organization style and a learning culture. The last few items are normally discussed in the context of DevOps, but we saw a need to expand the perspective to include the changes being made with respect to infrastructure and applications that act to complement DevOps capabilities. So, we’re not trying to minimize DevOps in any way, because we view it as essential to “running with the big dogs,” but we’re also saying that there’s more that needs to be done with respect to the underlying technology to optimize end-to-end agility.
In addition, while the term “scale” usually refers to size, we’re not suggesting that only large enterprises can benefit. Another scale “attribute” is speed, and so we’re stating that even smaller firms (or departments within larger IT organizations) can still benefit from a web-scale IT approach. Agility has no size correlation, so even more modestly sized organizations can achieve some of the capabilities of an Amazon, etc., provided that they are willing to question conventional wisdom where needed.
Web-scale IT is not one size fits all as we don’t want to replace one IT dogma with another. In true pace-layered fashion, use the approach to IT service delivery that works best for your customers. Gartner suggests that so-called “systems of innovation” which are applications and services needing high rates of change are the more likely initial candidates, but IT organizations are urged to experiment to see what makes sense for them.
Stay tuned for more on web-scale IT from Gartner in the future!
Category: IT management Tags: agile, DevOps, Web-scale IT, WOA
by Cameron Haight | November 8, 2011
Much of the discussion around tooling in the DevOps environment focuses on automation of the configuration and release management processes. Somewhat lower on the radar screen are tools (and processes) for the monitoring of an evolving IT environment. This excellent presentation by Patrick Debois, Israel Gat and Andrew Shafer talks about how the metrics that are collected through the monitoring process have to change. The tools, however, may be morphing as well. “Big management data” will likely lead to a change in the architecture of the technologies that we use for monitoring. Established companies like Splunk have of course been trailblazers in this regard, but others are also starting to arise. Boundary, which recently came out of stealth mode, is another company to potentially keep your eye on. I’m working now on a research note detailing some of the non-conventional wisdom that has accrued when it comes to DevOps-oriented monitoring. I plan to follow this up with some additional writing on how management architectures will increasingly look like their business application peers in the big data world, so stay tuned.
by Cameron Haight | November 2, 2011
I recently downloaded the movie “Father of Invention,” in which Kevin Spacey’s character describes himself as a “fabricator” who combines old inventions into new ones that he then sells on infomercials. The movie was okay, but as I was watching and eating my popcorn, I kept coming back to the fabricator term. I think this is a good way to describe what a lot of the “Big Four” management vendors do. And this is both good news and bad news. These large firms mostly acquire existing technologies, often add some IP of their own and then introduce something with added value (such as BSM, etc.). It’s been a good business model which manages risk for the large companies and offers a liquidity event to an “innovator” firm whose investors may be concerned about the cost of building out a sales channel. In economic terms, this would be viewed as an efficient use of capital, as the larger management firms usually have the capabilities to more effectively take the acquired technology to a broader level of adoption. However, there may be an industry downside to this, and it relates to innovation.
The traditional management vendors have R&D groups to be sure, but it tends to be little “r” or research and mostly big “d” or development (some would say integration). True, companies like HP have a substantial research effort, but historically only some of this seems applicable to the IT management space although this may be changing. Net is that when acquisitions occur among these larger players, we often seem to see some degree of stagnation of the purchased technology. While this too may be starting to change, I still expect that we will see more of this cycle play out in the years to come as new markets emerge. Thus we may need to start looking elsewhere for “sustainable” innovation in the IT management marketplace.
One area that I have been following more closely is the work being done by the large cloud services providers, as part of my DevOps coverage. Google and Yahoo are engaging in widespread cloud research, and even organizations such as Netflix and Flickr are providing information on some of their engineering and operations-oriented work. Why is this important to the management space? Because they often eat their own dog food, i.e., what they build for their application needs often winds up playing a role in their management infrastructure. Unlike some start-ups that may be still trying to determine market requirements, these organizations are building technology that they need today and that thus has immediate application. Hence we are starting to see more scalable management technology (search for the presentation “Distributed Computing at Google”) and new operations approaches starting to come to the forefront.
While many of these tools and processes may not find direct application within the enterprise, I believe that some of this progress will find its way into the more traditional corporate world. In fact, I assert that future management platforms will likely incorporate more of a Web 2.0-style architecture paradigm, if for no other reason than to deal with a future involving “big management data.” In a way, these large cloud services providers are becoming new-age NASAs, helping to develop technology for a specific need that will likely see some degree of more broad-based commercialization down the road. They may very well become the IT management innovators of the future.
by Cameron Haight | October 24, 2011
The CAP Theorem states that it is impossible for a distributed system to simultaneously have the characteristics of consistency, availability and partition tolerance. I was recently wondering if there were analogs of this type of thought in one of the areas that I’ve historically covered at Gartner, i.e., performance monitoring. In other words, what are the trade-offs in terms of performance monitoring technologies? What got me thinking about this was the notion, sometimes found in the DevOps philosophy, that seems to suggest you should “measure everything.” Given that the “object set” requiring oversight is potentially rapidly increasing due to cloud, etc., would this require a change in the conventional wisdom of how we currently monitor IT infrastructures? What I came up with was the following:
- Consumption (of resources)
- Accuracy (of data)
- Depth (of collection)
So, as with the CAP Theorem, with “CAD” you could have:
- D+A: You can instrument deeply (D) and obtain great accuracy (A) through frequent sampling intervals, but you pay a resource consumption (C) penalty
- D+C: You can instrument deeply (D) and minimize consumption (C), but you give up accuracy (A) (due to larger sampling intervals) to achieve it
- A+C: You can have accuracy (A) and minimize consumption (C), but you will need to scale back the depth (D) of variables collected
I know that modern monitoring technology has increasingly incorporated the concept of performance impact “governors,” but I’m not sure that this invalidates the concept. Also, I’m not sure if latency needs to somehow be factored in (not only due to the larger monitoring object set, but also the physical impact of geographical separation ala the cloud). Finally, maybe the CAD concept won’t matter anyway with the performance capabilities found in modern servers and high bandwidth networks. I’m throwing this open to the broader community to see if there are any errors in my logic and/or opposing views. Many thanks.
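To make the CAD trade-off concrete, here is a toy cost model in Python. The function name, its parameters and the cost formula are all illustrative assumptions, not drawn from any real monitoring product:

```python
# A toy model of the "CAD" trade-off: resource consumption (C) grows with both
# instrumentation depth (D) and sampling accuracy (A), so fixing any two of the
# three dimensions constrains the third. The cost formula is purely illustrative.

def consumption(depth: int, interval_s: float, cost_per_sample: float = 0.001) -> float:
    """Estimated collector overhead (CPU-seconds per second of wall time).

    depth      -- number of metrics collected per sampling pass (D)
    interval_s -- seconds between samples; smaller means more accurate (A)
    """
    samples_per_second = 1.0 / interval_s
    return depth * samples_per_second * cost_per_sample

# D+A: deep collection, frequent sampling -> you pay a consumption penalty
deep_accurate = consumption(depth=500, interval_s=1.0)
# D+C: deep collection, cheap -> stretch the interval, giving up accuracy
deep_cheap = consumption(depth=500, interval_s=60.0)
# A+C: frequent sampling, cheap -> scale back the number of metrics collected
shallow_accurate = consumption(depth=10, interval_s=1.0)

# deep_accurate = 0.5, deep_cheap ~ 0.0083, shallow_accurate = 0.01
print(deep_accurate, deep_cheap, shallow_accurate)
```

A performance-impact “governor” in this model is simply a cap on the function’s output that forces either the interval up or the depth down, which is why I don’t think governors invalidate the underlying trade-off.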