by Jonah Kowall | June 4, 2014 | 3 Comments
Thanks to those who came out to the Gartner IOM conference in Berlin, which wrapped up yesterday. There were interesting things happening and good discussions with clients and attendees. We have the US version of this conference next week in Orlando, and I will be there!
Posting this from lovely Budapest.
Keeping on the theme of log analytics, which comes up a lot in conversations related to both unified monitoring (infrastructure availability) and APM, we are seeing this technology as particularly applicable across monitoring disciplines and silos within organizations.
There is yet another company we haven't highlighted which has been flying under the radar since being founded in Israel in 2003. It has been building index and search technology that differentiates itself by doing deeper automated analysis of the data before the user is involved in querying it.
The software discovers patterns and problems within the log data and is more proactive than other log analytics tools used by IT operations. Data is searched by the user, and additional layers are placed on top of the data to provide context. No rules are needed to enable these features. The product has its own indexing and storage system, but can also support Hadoop data stores.
Version 5.0 was recently launched, which slightly improves the user interface and also adds native support for logstash (you can read my other posts on the ELK stack).
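For context on what native logstash support means in practice: a logstash configuration describes an input, optional filters, and an output. The sketch below is illustrative only; the file path and index name are placeholders, and the syntax shown is the logstash 1.x-era style that was current at the time:

```conf
input {
  file {
    path => "/var/log/app/*.log"     # placeholder path
    start_position => "beginning"
  }
}
filter {
  grok {
    # Parse a syslog-style line into structured fields.
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    host  => "localhost"             # 1.x-era single-host setting
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

A vendor that accepts logstash output natively can ingest events shipped this way without its own collection agent.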
Once additional data is found you can add that insight to the query:
In the screenshot below you can see some of the unique ways data is layered within the visualization timeline.
The product could use a more modern and usable user interface, along with easier implementation of collection agents and technologies to help get data into the system. Improving these basic functions would help exploit a very impressive analytics engine.
The company has not had particularly good visibility: as a self-funded, technology-focused company, it has not invested in marketing or sales efforts to date. This doesn't mean it hasn't done well; it has some impressively large installs of the technology. The approach has resulted in less growth than competitors have had, but the company is looking to change that.
Category: Analytics IT Operations Logfile Monitoring Pick of The Week Tags:
by Jonah Kowall | May 29, 2014 | 9 Comments
Gartner is beginning the research process for the 2014 Magic Quadrant for Application Performance Monitoring. Will Cappelli and I will be running the research as we have for the last few years. The release is currently planned for the 4th quarter of 2014, and we will be sending out surveys to vendors next week. Based on the criteria below, if you believe you qualify for the research please get in touch with me via email or Twitter (@jkowall):
The 2013 Magic Quadrant saw minor changes around the weighting and importance of analytics and mobile. The 2014 research once again sees importance in these two critical trends, but also a focus on SaaS delivery as essential, as enterprises continue to adopt various public-cloud-delivered applications, services, and infrastructure.
Market Definition Is Unchanged
In the 2014 Magic Quadrant, the market definition will remain unchanged from the 2013 version (see "Magic Quadrant for Application Performance Monitoring"), although we have made changes to the application definition. Gartner defines an application as a software program or group of programs that interact with their environment via defined interfaces and which are designed to perform a specific range of functions. They may be end-user facing, presenting a user interface, or provide the interface between two applications themselves. Applications are not (normally) wholly independent, as they typically require an operating system or multiple operating system instances to manage their use of physical and logical computing, storage and network resources within a data center, or provided by third parties.
Gartner defines APM as having five dimensions of functionality:
- End-user experience monitoring (EUM) — The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.
- Application topology discovery and visualization — The discovery of the software and hardware infrastructure components involved in application execution, and the array of possible paths across which these components communicate to deliver the application.
- User-defined transaction profiling — The tracing of user-grouped events, which comprise a transaction, as they occur within the application and interact with the components discovered in the second dimension; this is generated in response to a user's request to the application.
- Application component deep dive — The fine-grained monitoring of resources consumed and events occurring within the components discovered in application topology discovery and visualization dimension. This includes server-side components of software being executed.
- IT operations analytics — The combination or usage of techniques, including complex operations event processing, statistical pattern discovery and recognition, unstructured text indexing, search and inference, topological analysis, and multidimensional database search and analysis to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM. Additionally, these data sets are increasingly being analyzed not only for operational information, but also for business and software analytics.
Vendors will be required to meet the following criteria to be considered for the 2014 APM Magic Quadrant. In comparison to 2013, we have adjusted numerical thresholds.
■ The vendor's APM product must include all five dimensions of APM (EUM, application topology discovery and visualization, user-defined transaction profiling, application component deep dive, and IT operations analytics). The deep-dive monitoring capabilities must include Java and .NET, but may also include one or more key application component types (e.g., database, application server). The solution must include user-defined transaction profiling and IT operations analytics technologies applied to the text and metrics collected by the other four dimensions.
■ The APM product must provide compiled Java and .NET code instrumentation in a production environment.
■ Customer references must be located in at least three of the following geographic locations: North America, South America, EMEA, the Asia/Pacific region and/or Japan.
■ The vendor should have at least 50 customers that use its APM product actively in a production environment.
■ The vendor references must confirm they are monitoring at least 200 application server instances in a production environment.
■ Some features of the APM offering must be available via a SaaS delivery model. This offering must be delivered directly from the vendor.
■ The product must be shipping to end-user clients for production deployment and designated with general availability by July 15, 2014.
■ Total revenue (including new licenses, updates, maintenance, subscriptions, SaaS, hosting and technical support) must have exceeded $5 million in calendar year 2013.
In addition to these criteria, we will be evaluating the vendor’s ability to cross multiple buying centers, as well as its ability to target specific verticals as validated by reference customers. Detailed criteria and subcriteria will be published along with the final research later this year.
While a vendor may meet the inclusion criteria for the APM Magic Quadrant, placement within the finalized Magic Quadrant will depend on its scoring in a number of categories. Ratings in these categories will be used to determine final placement within the 2014 APM Magic Quadrant. The 2014 evaluation criteria are based on Completeness of Vision and Ability to Execute.
Completeness of Vision
Market Understanding: This criterion evaluates vendor capabilities against future market requirements. The market requirements map to the market overview discussion and look for the following functionality:
■ EUM, including real and synthetic availability testing
■ Runtime application architecture discovery
■ User-defined transaction profiling
■ Application component deep dive
■ IT operations analytics for problem isolation and resolution
■ IT operations analytics to answer questions about software or business execution
■ Ability to address the mobile APM market
Marketing Strategy: We evaluate the vendor’s capability to deliver a clear and differentiated message that maps to current and future market demands, and, most importantly, the vendor’s commitment to the APM market through its website, advertising programs, social media, collaborative message boards, tradeshows, training and positioning statements.
Sales Strategy: We evaluate the vendor’s approach to selling APM to multiple buying centers. We also evaluate the vendor’s ability to sell in the appropriate distribution channels, including channel sales, inside sales and outside sales.
Offering (Product) Strategy: We evaluate product scalability, usability, functionality, and delivery model innovation. We also evaluate the innovation related to delivery of product and services.
Business Model: This is our evaluation of whether the vendor continuously manages a well-balanced business case that demonstrates appropriate funding and alignment of staffing resources to succeed in this market. Delivery methods will also be evaluated as business model decisions, including the strength and coherence of on-premises and SaaS solutions.
Vertical/Industry Strategy: We evaluate the targeted approaches in marketing and selling into specific vertical industries. Commonly, APM solutions are bought and targeted toward the financial services, healthcare, retail, manufacturing, media, education, government and technology verticals.
Innovation: This criterion includes product leadership and the ability to deliver APM features and functions that distinguish the vendor from its competitors. These include unique approaches to application instrumentation, mobile visibility, and catering towards the increased demands of continuous release. Specific considerations include resources available for R&D, and the innovation process.
Geographic Strategy: This is our evaluation of the vendor’s ability to meet the sales and support requirements of IT organizations worldwide. In this way, we assess the vendor’s strategy to penetrate emerging markets.
Ability to Execute
Product/Service: Gartner evaluates the capabilities, quality, usability, integration and feature set of the solution, including the following functions:
■ Day-to-day maintenance of the product
■ Ease and management of deploying new APM
■ Ease of use and richness of functions within the product
■ Product deployment options and usability
■ Integration of overall APM-related portfolio or unified APM offering
Overall Viability (Business Unit, Financial, Strategy and Organization): We consider the vendor's company size, market share and financial performance (such as revenue growth and profitability). We also assess the company's leadership in terms of its people, what employees think of the leadership, and its ability to drive the company forward. We also investigate investments and ownership, and any other data related to the health of the corporate entity. Our analysis reflects the vendor's capability to ensure the continued vitality of its APM offering.
Sales Execution/Pricing: We evaluate the vendor’s capability to provide global sales support that aligns with its marketing messages; its market presence in terms of installed base, new customers, and partnerships; and flexibility and pricing within licensing model options, including packaging that is specific to solution portability.
Market Responsiveness and Track Record: We evaluate the execution in delivering and upgrading products consistently, in a timely fashion, and meeting road map timelines. We also evaluate the vendor’s agility in terms of meeting new market demands, and how well the vendor receives customer feedback and quickly builds it into the product.
Marketing Execution: This is a measure of brand and mind share through client, reference and channel partner feedback. We evaluate the degree to which customers and partners have positive identification with the product, and whether the vendor has credibility in this market.
Customer Experience: We evaluate the vendor’s reputation in the market, based on customers’ feedback regarding their experiences working with the vendor, whether they were glad they chose the vendor’s product and whether they planned to continue working with the vendor. Additionally, we look at the various ways in which the vendor can be engaged, including social media, message boards and other support avenues.
Category: APM IT Operations Monitoring Tags:
by Jonah Kowall | May 23, 2014 | 31 Comments
Prior to Gartner, as an end user I watched the rise of companies like Precise Software (I was a customer) and OpTier. I'm not going to rehash all of the interesting twists of Precise; you can read about them here: http://en.wikipedia.org/wiki/Precise_Software. The company was once worth well over $500m, with some superb technology. With a successful $140m IPO in 2000, they were a high flyer. Through the changes of being bought twice after the IPO, the business execution and leadership began to suffer. This was something they did not recover from, eventually resulting in a fire sale of the technology, which was picked up for pennies on the dollar by Idera last year. We have high hopes Idera can rebuild the Precise technology and meld it with CopperEgg to create a compelling APM solution (of course both SaaS and on-premises).
Similarly, OpTier, another great technology innovator in APM that pioneered the use of advanced analytics in the space, didn't suffer from a lack of vision or technology. The issues were once again with leadership outside of the technical parts of the organization. Having raised well over $100m over the last 9 years, the company never kept pace with the changes in the market. OpTier finally closed its doors as of yesterday; it is really sad to see that the transformation from on-premise-heavy enterprise software to SaaS was not happening fast enough to fix the cash situation. Hopefully someone will acquire some of these great assets and see the transformation through.
I’m starting to see yet another story follow in this similar path, but nothing I can write about yet… We shall see.
It's certainly been an interesting 3 years at Gartner covering the APM space. Upon joining the company, there was some innovation happening and the rise of two of the most well-regarded and asked-about companies in the APM space. Both AppDynamics and New Relic have been major disruptors in terms of making APM easy, inexpensive, and effective. Beyond the depth and technical expertise of the companies (which they both clearly have), it was about the delivery model and execution from a sales, marketing (which matters quite a bit), and senior management vision perspective. These two companies are both young, but the experience and leadership speak for themselves. With valuations well over $2b each and a path toward IPO should they need the funding, they are the new generation of APM companies pushing the envelope.
I'm not discounting the technology and size of Compuware; they are going through a pretty drastic transformation themselves, truly focusing on APM as a core business. Fixing some of the prior mistakes on the business side has been a journey for them, but there is no question about the vision and technology. Expect some disruptive capabilities from Compuware; they are not in follow mode, while many if not all other APM companies are.
There are few other companies in the space truly leading and innovating, but many are trying to change and catch up. The question is, with the rate of innovation and change, can anyone actually accomplish that, especially given the level of growth and capital commanded by the leaders in the market?
Please comment here or at @jkowall on Twitter.
Category: APM IT Operations Monitoring Tags:
by Jonah Kowall | May 21, 2014 | 2 Comments
I wanted to give a heads up that we've updated a note, published about 18 months ago, around open-source and freeware monitoring tools.
Clients only link:
The document covers basic information on the following technologies:
Minisuites: Ipswitch, ManageEngine, SolarWinds
Free monitoring from Artica ST (Pandora FMS), GroundWork, Icinga, op5, Opsview, Paessler, SevOne, Spiceworks, Splunk, Torch (Graylog2), VMTurbo, XpoLog, Zabbix, Zenoss
SaaS monitoring from AppDynamics, AppFirst, Boundary, Datadog, GFI Software, Loggly, New Relic, ScaleXtreme (now Citrix), Splunk, and Sumo Logic
Changes from late 2012 until today:
Since writing our previous research on how to leverage free and low-cost server, network and storage monitoring tools, we’ve seen the following changes in the market:
- AppDynamics — Lite offering provides always-on monitoring, where the previous product was only functional during use
- Artica ST — Pandora FMS, open-source offering
- Icinga — Open-source offering
- Loggly — Relaunched offering targeted at a wider user community
- Torch — Graylog2, a new offering
- XpoLog — Free offering for log analysis
Compared with our earlier research about free and low-cost server, network and storage monitoring, we removed the following vendors, which, unless otherwise noted, left this market:
- Correlsense — Low-cost offering
- Jinspired — Lack of adoption across Gartner client base
- Net Optics — Lack of adoption across Gartner client base
- Quest Software (now part of Dell Software) — Low-cost offering
- VMware — Low-cost offering lacked Gartner client adoption, but the solution is being retooled and better integrated with the VMware vCenter Operations Manager offering.
Sorry, I forgot to include a few vendors and solutions in the note; they will be in the next revision:
- CA Nimsoft Monitor Snap is a freemium offering covering server monitoring for up to 30 servers at no cost. They launched the offering about 7 months ago and have had over 2,500 installations of the product!
Category: APM IT Operations Logfile Monitoring SaaS Tags:
by Jonah Kowall | May 17, 2014 | 2 Comments
Sorry, no cool vendor this week; instead I'm going to cover some other "cool" stuff coming from a company we don't normally think of as particularly "cool". Microsoft is innovating in the systems management space and creating unique technologies with a broad reach. Microsoft has a significant install base of Operations Manager, and it comes up very regularly in client inquiry, hence I've written several research notes focused on this technology.
I was pleased to make it out to Microsoft TechEd this year, after not making it since 2011 (too many vendor shows; I need to rotate). This year was an off year in terms of major platform launches, especially around System Center Operations Manager (previously called SCOM). Microsoft still announced a slew of new cloud-based products. Around Visual Studio, highlights included mobile development based on Apache Cordova, allowing for building universal Windows apps (Store and Phone) across Microsoft platforms, as well as iOS and Android.
Within Visual Studio Online, the updates around Application Insights were particularly interesting in terms of APM, one of my focus areas. Additionally, the announcement of improved functionality and capabilities within System Center Advisor is of interest to my coverage of ITOA and APM. Let's dig a bit deeper…
I can now talk more publicly about the changes Microsoft has been making internally over the last 14 months, re-organizing the teams built around the Avicode acquisition (Microsoft's APM technology) from being aligned with the System Center teams to sitting underneath Visual Studio (the Developer Division). This organizational and strategic change enabled Microsoft to create a developer-focused APM product consisting of instrumentation of Java and .NET along with embedded tools within Visual Studio for creating custom instrumentation.
Through this change Microsoft has launched a SaaS-only APM product, which over time will be fed into Operations Manager. They have built specific tooling which integrates into Visual Studio, making it easy for developers to write their own instrumentation. This instrumentation can include any metric data or custom messages, which are sent to the online service (currently completely free of charge as a preview - http://msdn.microsoft.com/en-us/library/dn481095.aspx) and can be viewed, reported upon, and analyzed. Over time this will provide a broader understanding of software analytics across Microsoft and non-Microsoft technologies and platforms. While there are limitations in the preview, there is a lot which can be done today.
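The custom-instrumentation model is simple in principle: application code records a named metric or event and posts it to the online service. As a rough, language-agnostic sketch (the endpoint URL and payload fields below are hypothetical placeholders, not the actual Application Insights wire format or SDK):

```python
import json
import time
import urllib.request

# Hypothetical collection endpoint, for illustration only.
SERVICE_URL = "https://example-telemetry-endpoint/track"

def build_metric(name, value, properties=None):
    """Assemble a custom-metric event, roughly as an instrumentation SDK might."""
    return {
        "name": name,
        "value": value,
        "timestamp": time.time(),
        "properties": properties or {},
    }

def send_metric(event):
    """POST the event to the online service as JSON."""
    req = urllib.request.Request(
        SERVICE_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# In application code, instrumentation becomes a one-liner per measurement:
event = build_metric("checkout.latency_ms", 182, {"region": "eu-west"})
```

The point of the Visual Studio tooling is that this boilerplate is generated for you, so developers only write the one-liners.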
The online portal also leverages the same Global Service Monitor (GSM) synthetic availability monitoring, which can monitor a single URL or a set of steps a user may traverse while using a web application. This is already available for free to users of Operations Manager (with some limitations). It is similar to the synthetic testing capability you often see offered by Compuware Gomez, Keynote, and over three dozen other companies. These technologies are now part of developer-centric tools in Visual Studio Online as well.
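Synthetic availability monitoring of a single URL is conceptually straightforward: request the page on a schedule and record status and latency. A minimal sketch of one probe (the URL would be whatever you want watched; real services run probes like this from many geographic locations and alert on the results):

```python
import time
import urllib.request
import urllib.error

def probe(url, timeout=10):
    """Fetch a URL once, returning (ok, http_status, latency_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code  # server responded, but with an error status
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout: no status at all
        return (False, None, time.monotonic() - start)
    latency = time.monotonic() - start
    return (200 <= status < 400, status, latency)

# ok, status, latency = probe("http://example.com/")  # placeholder URL
```

Multi-step transaction monitoring is the same idea extended to a scripted sequence of requests with assertions on each response.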
The APM components can also monitor servers or Azure components in terms of usage and performance information by leveraging built-in instrumentation or agents.
Short video : http://channel9.msdn.com/Series/Application-Insights-for-Visual-Studio-Online/Application-Performance-Monitoring-APM–Diagnostics-with-Application-Insights
System Center Advisor
This System Center component was previously uninteresting from my coverage perspective, focused as it was on identification of configuration problems in systems. The product has always been cloud-only, and is part of the System Center suite.
Microsoft has changed things in the preview of this service, including several security and operations use cases. This begins with sending more collected data from System Center components to the online service, such as leveraging OpsMgr (SCOM) as a data source.
The preview includes capacity planning use cases across systems, both physical and virtual, but more interestingly adds log analytics technology. Event log data is fed from OpsMgr, but I expect this to expand over time. The log analytics today are based on Elasticsearch running on Azure under the covers, but the presentation is all web-based. The query language and analytics are pretty basic, but useful. The speed of analysis is very good based on the demos I have seen, though I have yet to get it up and running in my lab. Regardless, this is the start of some interesting technology from Microsoft. During the preview this is completely free of charge, and you don't have any costs for storage since the data is all stored on Microsoft Azure.
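For readers unfamiliar with what "Elasticsearch under the covers" implies: a log-analytics search ultimately compiles down to a JSON query against the index. Purely as an illustration (the index and field names are made up, and this is not Microsoft's query language; the `filtered` form shown is the Elasticsearch 1.x-era syntax), a search for error events in the last hour looks roughly like:

```python
import json

# Hypothetical query for illustration: match "error" in the message field,
# restricted to the last hour, returning up to 50 hits.
query = {
    "query": {
        "filtered": {
            "query": {"match": {"message": "error"}},
            "filter": {
                "range": {"@timestamp": {"gte": "now-1h", "lte": "now"}}
            },
        }
    },
    "size": 50,
}
print(json.dumps(query, indent=2))
```

A web front end like Advisor's simply builds structures like this from the user's search box and renders the hits and aggregations.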
Video from TechEd (Fast forward to about 1:05:27 : http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B381#fbid=)
Hopefully this was helpful. Feel free to leave a comment or hit me up on Twitter @jkowall
Category: Analytics APM Big Data IT Operations Logfile Mobile Monitoring SaaS Trade Show Tags:
by Jonah Kowall | May 9, 2014 | 9 Comments
I'm getting this one done ahead of next week, as I will be at Microsoft's TechEd conference Monday through Wednesday in Houston. If anyone wants to meet up, just hit me up on Twitter; I'll be in meetings and sessions.
There is no question that the vendors which come up most regularly for basic availability monitoring are those offering low-cost, easy-to-use, and effective products that monitor component health for availability. This is the main reason folks like Microsoft, SolarWinds, and ManageEngine come up so often in monitoring inquiry. Building a product which focuses on ease of use is difficult; it involves the entire experience, from download, POC, implementation, and purchasing through day-to-day use and maintenance. As engineers we tend to over-engineer, and software vendors are guilty as well: bloated products are designed by listening to each customer request and implementing solutions without stepping back to reconsider the design and usability. The vendor highlighted this week has done a good job rebuilding their product in this manner.
AdRem Software is based in Poland, but has an office in New York, NY. They focus on building a unified monitoring offering, NetCrunch, which handles multiple use cases in the monitoring space. With the recent release of version 8, there has been renewed focus on creating larger market relevance and growing the client base. Founded in 1998, they have long been selling monitoring products, but we have seen less adoption across our client base, probably due to a lack of sales and marketing investment. The customer base tends to be concentrated in Japan and Europe; with renewed focus and investment in marketing, penetration may improve.
The product's features include network monitoring (with topology), flow analysis, and server monitoring (including virtualization technologies). One unique feature is agentless monitoring: the product uses ssh to get deeper server monitoring of Linux variants (and *BSD and Mac OS X) without software agents, which typically cause support pain. The product supports dozens of standard packaged applications found on servers, as most unified monitoring tools do. On the network side of things, the product builds topologies of interconnected devices and presents rich maps. These maps also present flow data, such as bandwidth and data usage of the endpoints, including the servers.
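Agentless collection over ssh generally means running a standard command remotely and parsing its text output. A sketch of the parsing half (in practice the string would come from something like `ssh host uptime`; here it is a sample string, and the regex is my own illustration, not AdRem's implementation):

```python
import re

def parse_load_average(uptime_output):
    """Extract the 1/5/15-minute load averages from `uptime` output.

    Handles both the Linux form ("load average: a, b, c") and the
    BSD/Mac form ("load averages: a b c"). Returns None if not found.
    """
    match = re.search(
        r"load averages?: ([\d.]+),? ([\d.]+),? ([\d.]+)", uptime_output
    )
    if not match:
        return None
    return tuple(float(x) for x in match.groups())

sample = " 16:02:11 up 42 days,  3:14,  2 users,  load average: 0.15, 0.22, 0.18"
print(parse_load_average(sample))  # → (0.15, 0.22, 0.18)
```

The appeal of this approach is exactly what the post describes: nothing has to be installed or maintained on the monitored host.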
I implemented the product in my lab; the download and install process was very easy, and the wizard, which includes configuration and auto-discovery, was very well done. The backend includes standard SQL, a proprietary NoSQL store (for metrics), and an XML schema where state data is kept. This is an easy-to-implement solution with care paid to the design elements. The unified views include seeing multiple data sets in a single place:
Some of the issues with the product include that the tool is not web-based (EDIT: they have a web UI, but it's more of a second-class citizen; it does look nice and shares the same look and feel); there is still a Windows application, making the data less available to other people within the organization who do not have the client. The product is also focused more on network use cases than server use cases, but it handled server monitoring quite nicely in my testing (see the screenshot from my lab above). The company has been around for quite a while, but has remained small in terms of staff and investment. The product is priced quite attractively, in a manner similar to what you see for other low-cost tools, such as those mentioned above.
Thanks for reading. Please leave comments here or on Twitter @jkowall
Category: IT Operations Monitoring NPM Pick of The Week Tags:
by Jonah Kowall | May 2, 2014 | 3 Comments
There is lots of interest across the board in how to integrate and deliver in support of a DevOps philosophy. In this third annual Cool Vendors in DevOps research there are five vendors which help DevOps managers, app engineers, and release and cloud managers control the application life cycle.
This year I contributed Caliper.io, an interesting monitoring company which focuses on monitoring the user experience of single-page applications. Single-page applications are becoming increasingly popular amongst newer web application architectures, and they change the page-interaction paradigm that most monitoring and measurement is based on.
I also contributed a write-up for Datadog; you might have seen other research where they were covered. This innovative SaaS offering provides glimpses of new ideas for event management, collaboration, and open monitoring systems. Of course, they also have their own set of challenges.
Ronni Colville and Colin Fletcher contributed MidVision for their deployment software.
Colin Fletcher and Jim Duggan (see, we have good DevOps collaboration) contributed Plutora, which provides a SaaS-based ARA product with some interesting concepts around release management.
Finally, Colin Fletcher included ZeroTurnaround, who provide tooling around continuous testing to enable more efficient use of developer time. They also offer automated release software supporting a continuous delivery cycle.
Clients will have access to the full research, which highlights in much more detail why the approach of each technology provider is cool, the challenges they face, and who should be investigating or thinking about these innovative and emerging technology companies.
16 April 2014 G00262716
Category: DevOps IT Operations Mobile Monitoring Pick of The Week SaaS Tags:
by Jonah Kowall | April 30, 2014 | 3 Comments
We decided to rename our Cool Vendors research this year, since we regularly featured technologies which were not related directly to performance demands, but also included general analytics technologies applied to the needs of infrastructure and operations professionals. In this year's research we saw a similar split between these technologies.
In the research we profiled several vendors:
Will Cappelli included Metafor Software, who provide ITOA technologies to detect and better understand change and configuration of server environments. The product is available in SaaS or on-premises deployment models.
I included NetMotion Wireless, who are building some cool technology to better track and manage enterprise wireless quality and delivery of services. The product uses a small agent to measure performance and usage. As wireless connectivity becomes more critical, understanding carrier and hardware choices will increase in importance.
Colin Fletcher included Nexthink, who build ITOA technology which relies on information collected from end-user desktops to help with problem resolution, configuration issues, and some compliance use cases. Many issues today reside on end-user devices, and there are few technology providers who deliver the client-side visibility needed. (Other popular choices aside from Nexthink include Aternity and Lakeside Software.)
Colin Fletcher also included Sumo Logic, a SaaS based offering analyzing machine data (logs) similar to popular tools such as Splunk, but also providing a unique real-time architecture. The product has interesting elements of anomaly detection as well as the ability to coach the system’s auto categorization.
Finally, I also included ThousandEyes, who've taken a commoditized market of synthetic monitoring and made it interesting again by layering on additional data sources about the internet path (BGP). This provides added visibility and information for these synthetic transactions, making them much more useful to those running or relying on SaaS (which is pretty much everyone these days).
There is much more detail and analysis in the research, including why they are cool, challenges they face, and who should care about these technologies and technology providers. Clients can access the research at the link below:
Category: Analytics APM IT Operations Logfile Mobile Monitoring Pick of The Week SaaS Tags:
by Jonah Kowall | April 25, 2014 | 2 Comments
When I mentioned the "pick of the week" idea to my awesome manager John Enck (@johnenck; if you can get him to tweet I'll buy you a beer) he said "be careful if you commit to doing it weekly, you have to do it". I assured him this was not an issue, and then of course I missed a week. It will happen, but I'm trying to get out at least two or three of these per month. On to the good stuff….
GroundWork was one of the first companies to package open source monitoring components, including the ever popular Nagios engine, into a supported, tested, and well maintained commercial monitoring offering. This created a natural path for those using Nagios who wanted support, a more advanced product, and a consistent deployment model. Over the years GroundWork has become quite a different animal, with advanced portal technology, topological awareness and discovery, event correlation, and the ability to scale the solution for large and demanding environments. The next step in the maturation process was driven by customers who wanted to build solutions incorporating the monitoring data; GroundWork then rebuilt the core architecture with a robust API layer allowing for diverse use cases of monitoring data, including custom portals or other mashups. The evolution of the company moved them further into the concept of unified monitoring: they improved the ability to monitor network devices and hypervisors, and have been focused on support for multiple public and private cloud platforms. This unified monitoring platform has been gaining momentum, and the customer base has been shifting from Nagios upsells to those who need an extensible monitoring platform based on open standard components.
The main reason I wanted to highlight them is the research Colin Fletcher and I published toward the end of last year (http://blogs.gartner.com/jonah-kowall/2013/11/12/unified-monitoring-note-presentation-and-client-interest/), which highlighted the need for unified monitoring tools combined with log analysis. GroundWork is one example of a company that has done this. They have taken the open source ELK stack (highlighted in this blog post: http://blogs.gartner.com/jonah-kowall/2014/04/13/monitoring-technology-pick-week-of-april-7th-elasticsearch/) and incorporated it into the GroundWork solution. Today it’s a portlet within the product, and the management tooling needed around the ELK components isn’t there yet, but this is a beta product. The future product should better manage the data and integrate the search and reporting components coming from the Elasticsearch tooling. The feedback from the customer base has been clear: people want this pairing. In fact, they ran a survey across 400 of their users:
Key study findings:
- 37 percent of unified monitoring users review their IT logs via manual text search; 33 percent are already using log analysis software
- 96 percent find the ability to combine log field data with other monitoring event data in a single search tool and/or dashboard important
- 42 percent of users say they do not have enough time to start analyzing their IT log data; 18 percent say the cost is too high
These study findings echoed Gartner’s latest report, “Modernize Your Monitoring Strategy by Combining Unified Monitoring and Log Analytics Tools,” on how to better manage today’s complex and dynamic IT environments.
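To make the ELK pairing concrete: Logstash (which sits at the front of the ELK stack) is driven by a pipeline configuration with input, filter, and output sections. The fragment below is an illustrative sketch of that structure, not GroundWork's shipped configuration; the file path, port, and index name are assumptions.

```
# Illustrative Logstash pipeline: tail a log file, parse each line,
# and index the events into Elasticsearch for search and dashboards.
input {
  file {
    path => "/var/log/syslog"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

Once events land in Elasticsearch this way, a unified monitoring front end can query the same indices its dashboards already use, which is the pairing the survey respondents above were asking for.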
This is quite close to our findings at the last Gartner Data Center Conference, where 114 attendees responded to our audience polling during our presentation titled “The Elusive Promise of Unified Monitoring: How to Monitor Infrastructure and Applications”:
- 22% had centralized log analysis tooling
- 46% had tactical or disparate log analysis
- 39% had no log analysis
This is music to log analysis vendors’ ears.
Please leave comments here or on Twitter @jkowall.
Category: Analytics IT Operations Logfile Monitoring OLM Pick of The Week Tags:
by Jonah Kowall | April 15, 2014 | 2 Comments
In addition to the Unified Communications coverage, I also participated in research around IT operations management software. I contributed a write-up of Centerity, a small, innovative Boston-area company that provides unified monitoring technology. Core to the offering is helping organizations simplify and reduce the cost of availability monitoring, along with some more advanced features and interesting collection technology for breadth of coverage (including technologies such as SAP HANA and CCMS). Centerity provides a robust product with flexibility across multiple types of data acquisition, including compatibility with the popular open source Nagios plugins (or checks).
Ronni Colville and Milind Govekar included the German company Arago. I have spent time with them as well, and am very impressed by what they are doing in bringing automation and analytics together to learn the behaviors and actions of sysadmins. This differentiated approach makes a lot of sense, and customers have indicated positive results, but as with all automation, work is required up front.
Ronni Colville and Milind Govekar also included Innovise, a UK-based automation company with a programmable and robust library meant to break down the silos of automation commonly observed within enterprises today. Innovative features include cost and efficiency measurement of automation tasks, and analytics integration with a custom-developed complex event processing (CEP) engine.
Ian Head and Jeff Brooks highlighted Navvia for their service management offering with good workflow tools to model and run IT processes.
Finally, Jarod Greene and Ronni Colville highlighted Vistara, which offers a SaaS-based unified platform covering configuration management, patch management, remote system access, and basic features including orchestration and monitoring. The product provides a single user interface with views of the associated components.
Of course, my brief write-ups don’t touch on the depth of the research, including the challenges and the target personas who should evaluate these technologies as part of an IT operations management software strategy. Sorry, the in-depth research is for our subscribers:
Don’t worry, we are hard at work on Cool Vendors in APM this year; we expect it to publish shortly.
Category: Analytics Big Data DevOps IT Operations Monitoring SaaS Tags: