by Cameron Haight | May 16, 2013 | 1 Comment
In a research note published yesterday, Gartner introduced the term “web-scale IT.” What is web-scale IT? It’s our effort to describe all of the things happening at large cloud services firms such as Google, Amazon, Rackspace, Netflix and Facebook that enable them to achieve extreme levels of service delivery compared to many of their enterprise counterparts.
In the note, we identify six elements of the web-scale recipe: industrial data centers, web-oriented architectures, programmable management, agile processes, a collaborative organization style and a learning culture. The last few items are normally discussed in the context of DevOps, but we saw a need to expand the perspective to include the changes being made with respect to infrastructure and applications that act to complement DevOps capabilities. So, we’re not trying to minimize DevOps in any way, because we view it as essential to “running with the big dogs,” but we’re also saying that there’s more that needs to be done with respect to the underlying technology to optimize end-to-end agility.
In addition, while the term “scale” usually refers to size, we’re not suggesting that only large enterprises can benefit. Another scale “attribute” is speed, so we’re stating that even smaller firms (or departments within larger IT organizations) can still benefit from a web-scale IT approach. Agility has no size correlation, so even more modestly sized organizations can achieve some of the capabilities of an Amazon, etc., provided that they are willing to question conventional wisdom where needed.
Web-scale IT is not one size fits all, as we don’t want to replace one IT dogma with another. In true pace-layered fashion, use the approach to IT service delivery that works best for your customers. Gartner suggests that so-called “systems of innovation,” applications and services needing high rates of change, are the more likely initial candidates, but IT organizations are urged to experiment to see what makes sense for them.
Stay tuned for more on web-scale IT from Gartner in the future!
Category: IT management Tags: agile, DevOps, Web-scale IT, WOA
by Cameron Haight | November 8, 2011 | 1 Comment
Much of the discussion around tooling in the DevOps environment focuses on automation of the configuration and release management processes. Somewhat lower on the radar screen are tools (and processes) for the monitoring of an evolving IT environment. This excellent presentation by Patrick Debois, Israel Gat and Andrew Shafer talks about how the metrics that are collected through the monitoring process have to change. The tools, however, may be morphing as well. “Big management data” will likely lead to a change in the architecture of the technologies that we use for monitoring. Established companies like Splunk have of course been trailblazers in this regard, but others are also starting to arise. Boundary, which recently came out of stealth mode, is another company to keep your eye on. I’m working now on a research note detailing some of the non-conventional wisdom that has accrued when it comes to DevOps-oriented monitoring. I plan to follow this up with some additional writing on how management architectures will increasingly look like their business application peers in the big data world, so stay tuned.
Category: Uncategorized Tags:
by Cameron Haight | November 2, 2011 | Comments Off
I recently downloaded the movie “Father of Invention,” in which actor Kevin Spacey describes himself as a “fabricator” who combines old inventions into new ones that he then sells on infomercials. The movie was okay, but as I was watching and eating my popcorn, I kept coming back to the fabricator term. I think this is a good way to describe what a lot of the “Big Four” management vendors do. And this is both good news and bad news. These large firms mostly acquire existing technologies, often add some IP of their own and then introduce something with added value (such as BSM, etc.). It’s been a good business model that manages risk for the large companies and offers a liquidity event to an “innovator” firm whose investors may be concerned about the cost of building out a sales channel. In economic terms, this would be viewed as an efficient use of capital, as the larger management firms usually have the capabilities to more effectively take the acquired technology to a broader level of adoption. However, there may be an industry downside to this, and it relates to innovation.
The traditional management vendors have R&D groups to be sure, but it tends to be little “r” (research) and mostly big “d” (development; some would say integration). True, companies like HP have a substantial research effort, but historically only some of this seems applicable to the IT management space, although this may be changing. The net is that when acquisitions occur among these larger players, we often seem to see some degree of stagnation of the purchased technology. While this too may be starting to change, I still expect that we will see more of this cycle play out in the years to come as new markets emerge. Thus we may need to start looking elsewhere for “sustainable” innovation in the IT management marketplace.
One area that I have been following more closely is the work being done by the large cloud services providers as part of my DevOps coverage. Google and Yahoo are engaging in widespread cloud research, and even organizations such as Netflix and Flickr are providing information on some of their engineering and operations-oriented work. Why is this important to the management space? Because they often eat their own dog food, i.e., what they build for their application needs often winds up playing a role in their management infrastructure. Unlike some start-ups that may still be trying to determine market requirements, these organizations are building technology that they need today and that thus has immediate application. Hence we are starting to see more scalable management technology (search for the presentation “Distributed Computing at Google” plus see here) and new operations approaches starting to come to the forefront.
While many of these tools and processes may not find direct application within the enterprise, I believe that some of this progress will find its way into the more traditional corporate world. In fact, I assert that future management platforms will likely incorporate more of a Web 2.0-style architecture paradigm, if for no other reason than to deal with a future involving “big management data.” In a way, these large cloud services providers are becoming new-age NASAs, helping to develop technology for a specific need that will likely see some degree of more broad-based commercialization down the road. They may very well become the IT management innovators of the future.
Category: Uncategorized Tags:
by Cameron Haight | October 24, 2011 | 5 Comments
The CAP Theorem states that it is impossible for a distributed system to simultaneously have the characteristics of consistency, availability and partition tolerance. I was recently wondering if there were analogs of this type of thought in one of the areas that I’ve historically covered at Gartner, i.e., performance monitoring. In other words, what were the trade-offs in terms of performance monitoring technologies? What got me thinking about this was the notion that you sometimes find in the DevOps philosophy that seems to suggest “measure everything.” Given that the “object set” requiring oversight is potentially rapidly increasing due to cloud, etc., would this require a change in the conventional wisdom of how we currently monitor IT infrastructures? What I came up with was the following:
- Consumption (of resources)
- Accuracy (of data)
- Depth (of collection)
So, in a manner analogous to the CAP Theorem, with “CAD” you could have:
- D+A: You can instrument deeply (D) and obtain great accuracy (A) (frequent sampling intervals), but you pay a resource consumption (C) penalty
- D+C: You can instrument deeply (D) and minimize consumption (C), but you give up accuracy (A) (due to larger sampling intervals) to achieve it
- A+C: You can have accuracy (A) and minimize consumption (C), but you will need to scale back the depth (D) of variables collected
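To make the trade-off concrete, here is a toy model. The numbers are entirely illustrative assumptions (a linear per-metric collection cost, arbitrary depths and sampling rates), not measurements from any real monitoring tool, but they show how fixing any two of the three properties constrains the third:

```python
# Toy model of the hypothetical "CAD" monitoring trade-off.
# Assumption: collector overhead grows linearly with depth (metrics per
# sample) times sampling frequency (samples per minute).

def consumption(depth: int, samples_per_min: float, cost_per_metric: float = 0.01) -> float:
    """Collector overhead, e.g. expressed as a fraction of a CPU core."""
    return depth * samples_per_min * cost_per_metric

# D+A: deep instrumentation sampled frequently -> high consumption
deep_and_accurate = consumption(depth=500, samples_per_min=60)  # 300.0

# D+C: deep instrumentation under a fixed overhead budget -> coarse
# sampling, i.e. reduced accuracy
budget = 5.0
max_rate_when_deep = budget / (500 * 0.01)  # 1 sample per minute

# A+C: frequent sampling under the same budget -> shallow collection,
# i.e. reduced depth
max_depth_when_accurate = budget / (60 * 0.01)  # roughly 8 metrics
```

Under this (admittedly simplistic) linear model, lowering consumption while holding accuracy forces depth down, and holding depth forces the sampling interval up, which matches the three pairings above.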
I know that modern monitoring technology has increasingly incorporated the concept of performance impact “governors,” but I’m not sure that this invalidates the concept. Also, I’m not sure if latency needs to somehow be factored in (not only due to the larger monitoring object set, but also the physical impact of geographical separation ala the cloud). Finally, maybe the CAD concept won’t matter anyway with the performance capabilities found in modern servers and high bandwidth networks. I’m throwing this open to the broader community to see if there are any errors in my logic and/or opposing views. Many thanks.
Category: Uncategorized Tags:
by Cameron Haight | October 18, 2011 | 11 Comments
In my DevOps presentation today at Gartner Symposium (many thanks for the great crowd by the way), I mentioned only partly in jest that I think the DevOps movement needs a new name – or at least some different marketing if I may be so bold (my apologies to Patrick Debois). Something perhaps edgier – something that potentially evokes (at least in my mind) the rebellion against much of the conventional IT wisdom. My thought? Occupy Information Technology or “Occupy IT.” Is this too edgy? Is this too political (by the way, it’s not meant to have anything to do with the non-IT world other than the rebellion theme)? Feel free to comment.
Category: Uncategorized Tags:
by Cameron Haight | October 16, 2011 | 3 Comments
I’ll be presenting on DevOps and the Cloud Operating Model on Tuesday at our Symposium conference in Orlando (note: record attendance has been announced for the event). My focus will be in keeping with the theme of the show, i.e., “Re-imagine IT.” DevOps seems to have captured the interest of a lot of the attendees if my calendar of one-on-one meetings is any indication. Look for comments from me throughout the week (@hchaight and here). You can also follow some of the other noteworthy conference information live here.
Category: Uncategorized Tags: symposium
by Cameron Haight | October 4, 2011 | 2 Comments
Recently, the U.S. Defense Advanced Research Projects Agency (DARPA) announced a solicitation for the kick-off of the 100 Year Starship (100YSS) project. The announcement describes the project as:
The 100 Year Starship™ (100YSS™) is a project seeded by the Defense Advanced Research Projects Agency (DARPA), with NASA Ames Research Center as executing agent, to develop a viable and sustainable non-governmental organization for persistent, long-term, private-sector investment into the myriad of disciplines needed to make long-distance space travel viable.
The solicitation is a (very small) down payment on a project that is to be terminated on 11/11/2111. At its core, the project is imagineering on a grand scale. The website goes on to say:
The 100-Year Starship is about more than building a spacecraft or any one specific technology. Through this effort, DARPA seeks to inspire several generations to commit to the research and development of breakthrough technologies and cross-cutting innovations across myriad disciplines such as physics, mathematics, engineering, biology, economics, and psychological, social, political and cultural sciences. The goal is to pursue long-distance space travel while delivering ancillary results along the way that will benefit mankind.
Putting aside the question as to whether or not, as a society, this is something we should be doing, the technical challenges are of course both daunting and exciting. To put it in perspective, Richard Obousy, associated with a similar British effort called Project Icarus, is quoted as stating that using chemical rockets like those that took us to the moon to travel interstellar distances would require “more fuel than exists mass in the universe!” Clearly then, we can’t use traditional propulsion methods to get there.
Okay, interesting stuff, Cameron, and you’ve let on that you’re probably a Star Trek fan – what’s the connection to IT management? I don’t think that we need a one-hundred-year project, but the IT management market is in some serious need of imagineering. Each week I sit through a number of technology provider presentations that all largely seem to be a variation on a theme. Having managed vendor development teams in the past, I can appreciate much of the engineering, not to mention the go-to-market efforts, but almost none of these vendor pitches are challenging us to rethink how to do IT management “different.”
With respect to cloud management, there’s pretty much a required checklist: self-service portal, service catalog, service models, workflow engine, provisioning system, monitoring and metering agents, etc. There may be a service governor or two with some automated placement capabilities. A few of these manage to even enforce future reservations. But you don’t have to look too hard to see that a lot of this technology also existed when the world was still largely physical. The IT management market is taking the Apollo path to the moon, I mean clouds. Mainframe management ala IBM NetView was our Mercury Project. Our Gemini equivalent occurred with the emergence of HP OpenView and SunNet Manager in support of client/server environments. And Apollo is by and large where we are with cloud management technology today. I’m not trying to demean either effort, but as with Apollo, with respect to today’s cloud management tools we’re largely solving a known engineering problem. Important stuff, but we’re not really re-writing the IT management manual.
That is why I have been focused on DevOps and my own research on what I call the Cloud Operating Model. These approaches have largely derived from the real-world needs of large, public cloud service providers who saw the existing IT operations process and management landscape largely wanting and thus developed their own approaches. And while the increasing scale and support capabilities of these Web 2.0-style management architectures are impressive, we still find ourselves traveling at sub-light speed. To get to “warp” drive, we need to have a longer term vision that will require us to increasingly think out of the box. This will include improved IT management capabilities in areas such as:
- Visualization – how do we represent potentially hundreds of thousands of objects and their data points in a meaningful manner? Maybe we could take some pointers from the security industry as a start.
- Discovery – discovering the physical infrastructure was challenging, discovering the virtual environment difficult, and discovering clouds, especially public clouds, is almost impossible (today). How then do we identify available services (not to mention our IT assets) when we may not know a priori where they are? Perhaps the solution is buried somewhere in here or maybe the answer is to go tribal.
- Automation – most automation systems are primitive – they don’t prevent us from implementing poor workflow design, etc. Anyone remember this show? They probably could have used this.
- Analytics – the IT world is getting more, not less complex, giving rise to emergent behaviors. In response, there is increasing interest in statistical machine learning methods such as Bayesian probabilities, genetic algorithms, neural networks, etc. But will these allow us to solve increasingly “wicked” problems?
- Instrumentation – how do we get “agents” or other information collection mechanisms where we need them, when we need them whether on- or off-premise and with little to no demands on end users? Here’s a stealthy proposal for virtual environments.
- Data management – with the need to collect more data on perhaps faster cycles, how will we deal with “big management data?” This group tried some new approaches given access to some of the systems at Google.
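As one concrete, deliberately simplified illustration of the analytics point above, a rolling z-score is about the simplest statistical method for flagging anomalous metric samples against a recent baseline. The window size, threshold and sample data here are illustrative assumptions, not recommendations:

```python
# Sketch: flag metric samples that sit far outside the recent baseline
# using a rolling z-score. Window and threshold are arbitrary choices.
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield (i, x)
        history.append(x)

# Example: a steady latency signal with one injected spike
latencies = [10.0 + (i % 3) * 0.5 for i in range(40)]
latencies[30] = 95.0
print(list(zscore_anomalies(latencies)))  # → [(30, 95.0)]
```

Real-world metric streams are of course messier (seasonality, level shifts, multi-modal behavior), which is exactly why the more sophisticated techniques mentioned above – Bayesian methods, neural networks and the like – are attracting interest.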
I’m probably missing some areas (such as data models, etc.) but the above is I think a good start. Anyone have an idea for what to call this project? 5YITM? 10YITM? Note: I was never good at naming so I’ll leave this to someone with better branding skills than I. In the meantime as an industry, let’s get busy on moving beyond the clouds and reaching for the stars!
P.S. For those of you interested in seeing what a post-NASA world looks like, check out: Armadillo Aerospace, Bigelow Aerospace, Blue Origin, Scaled Composites and SpaceX.
Category: Uncategorized Tags:
by Cameron Haight | September 6, 2011 | 5 Comments
I, along with colleagues David Cole and Jarod Greene, have written recently about the emerging adoption of social media technology in support of IT operations and service management activities (for examples see here, here and here). On the IT operations side, there are several use cases that are evolving, i.e., end users helping one another, end users interacting with IT operations teams and intra-IT operations team activity coordination.
The latter case was more fully explained in my earlier note Collaborative Operations Management: A Next-Generation Management Capability. In it, I explained that the current crop of IT management tools provides little in the way of support for many of the unstructured activities that go on within an operations team (I state that the acronym ITSM really stands for IT Structured Management, and note the subtle irony that in order to actually get work done within an ITSM product today, you usually have to go “outside” the tool to get it initiated, i.e., via email, etc.).
In addition, because so much IT operations activity remains unaccounted for, the knowledge of how to perform specific IT operations processes often fails to be captured and turned into reusable assets. The adoption of social media technology potentially presents IT organizations with a means to better retain this information while positioning to become the future “glue” that ties together many of the more conventional IT management technologies. Social ITM technology may also be a means by which to improve collaboration between development and operations teams ala DevOps. Stay tuned for more on this evolving technology area.
Category: Uncategorized Tags:
by Cameron Haight | September 5, 2011 | Comments Off
It’s Labor Day here in the United States. The holiday is meant to recognize the contributions of the American worker to our overall freedom and economic well-being (although some of the luster of the day has worn off with the continuing economic troubles here and abroad). So, right now you might be saying, “Cameron, what’s the deal with tying Labor Day to the DevOps movement?” Well, in the same vein of Labor Day representing workers (versus management), I see DevOps as also being IT worker-centric. Frederick Taylor would likely not have approved of DevOps, as he largely saw workers as “do-ers” and management as the “thinkers.”
Yet in many ways DevOps has evolved from the ground up, i.e., it was not the brainchild of CIOs, but rather IT operations personnel, technical consultants and others that toiled in the trenches and decided that there must be a better way. Many people have tried and continue to try to define exactly what DevOps is (including myself in Gartner research). I like best the definition provided by Lindsay Holmwood. The first item in his listing is: improving collaboration and communication between development and operations teams. While he may not have intended it as such, I take this statement to be worker-focused, and not manager or management-related (the latter can be included, of course, but they are not the drivers per se). This seems to be consistent though with the Agile Manifesto with its emphasis on individuals.
While the industry may continue looking for a way to define DevOps more prescriptively, at the end of the day we know that it comes down to people – the IT “labor.” Empowering the people within IT, giving them the license to be innovative and to better participate in the fruits of their labors will hopefully result in the overall betterment of their company. Management, or managers, can facilitate this process by better understanding what motivates many of us. What they may find is that many of us “labor” in this field because we love it and thus traditional incentive systems may find less success. Finally, when you have a supportive environment that lets you work at something that you are fond of, the end result is usually superior. Thus, DevOps is win/win for both labor and management. Happy Labor Day!
Category: Uncategorized Tags:
by Cameron Haight | August 26, 2011 | 1 Comment
The news of course this week is that of Steve Jobs resigning as CEO of Apple. And as questions surround how Tim Cook will fare stepping into the role formerly held by an industry legend, perhaps we who work in the IT operations management industry should be asking “where is our own Steve Jobs?” Per Wikipedia, Jobs is listed as either primary inventor or co-inventor for more than 230 patents or patent applications. While his actual contribution to these is unclear, his individual impact on the personal computing, mobile communications and media industries has been by almost any measure enormous. For this, he has been called a visionary who admits to being guided by a quotation from hockey great Wayne Gretzky, i.e., “I skate to where the puck is going to be, not where it has been.”
The IT operations management market is large. For 2010, Gartner estimated that its size was nearing 16 billion dollars. There have been and continue to be many successful firms in this market, which is a testament to the business acumen of several of its leading managers. But we don’t have a Steve Jobs equivalent. In general, we are an industry that follows the puck, usually in response to some new technological advancement (i.e., client/server, Java, virtualization and now cloud computing). In our defense, I’ll admit that it’s difficult for IT operations management tools to actually lead a technological revolution. After all, the “thing” to be managed usually has to be created first. What we shouldn’t excuse, however, is a record which largely shows that when new computing paradigms emerge, many within our industry merely look for ways to adapt existing tools and processes in their support. Or they might play it safe, wait for the trend to become more established and then acquire one of the early entrepreneurs in the market. No company likes failures, especially when the stock market might not look so kindly upon our mistakes, but that’s how we learn and hopefully, ultimately, profit. Looking at his track record, Steve Jobs learned a lot yet still succeeded in creating a firm that recently was the world’s largest in terms of market capitalization.
Gartner’s Hype Cycle for Emerging Technologies, 2011, lists some of the new technological innovations that will likely become part of the fabric of our businesses in the future. Key themes include the connected world, interface trends, analytical advances and new digital frontiers. Who will be the visionary in the IT operations management space to not only be first to adapt some of these new capabilities, but to also help shape them and thus be potentially rewarded with an innovation premium, perhaps like that of Apple? I’m open for nominations.
Category: Uncategorized Tags: