by Jonah Kowall | June 30, 2014 | 3 Comments
Velocity is always one of the more enjoyable conferences for me to attend. I don’t get worked as hard as I do at Gartner conferences (which are also really enjoyable), since there I spend my time doing the educating rather than listening to other smart people. Velocity is a practitioner-focused conference and is very geeky (in a good way, for those of us who are pretty deep technologists). I’ll highlight some of the great sessions I attended and other technologies I discovered.
The conference is put on by a competitor, of course, since we run our own events, but they had over 2,400 registered attendees and over 100 sponsors. There seems to be growth here, and the conference gets larger every year. Here are some session notes I found interesting. You’ll notice a pretty wide spread, covering front-end performance, application middleware, and back ends.
Webpagetest deep dive – http://twitter.com/PatMeenan – http://cdn.oreillystatic.com/en/assets/1/event/113/WebPagetest%20Power%20Users%20Presentation.pdf
This is a great open source tool for measuring and diagnosing front-end performance. I’ve used the tool, but had been mostly ignoring it since it wasn’t evolving too much. That was quite a mistake since it’s evolved considerably since I’d last really used it.
- Good to dig into the new features in the advanced settings tab
- Always run more than one test when measuring
- Very cool advanced visual comparison
- Filmstrip view has been improved
- Can do mobile runs, which show it in a mobile browser (very cool)
- Browser CPU usage stats can be overlaid on waterfall
- Can export tcpdump (use in wireshark or cloudshark)
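The bullets above can be driven programmatically: WebPagetest exposes a REST endpoint (`runtest.php`) that accepts these options as query parameters. A minimal sketch of building such a request follows; the helper name is mine, and the exact parameter set (`runs`, `f`, `video`, `tcpdump`) should be checked against the current WebPagetest API docs before relying on it.

```python
from urllib.parse import urlencode

def build_wpt_request(url, api_key, runs=3, location="Dulles:Chrome", tcpdump=True):
    """Build a WebPagetest runtest.php request URL (hypothetical helper)."""
    params = {
        "url": url,
        "k": api_key,          # API key
        "runs": runs,          # always run more than one test
        "f": "json",           # ask for a JSON response
        "location": location,
        "video": 1,            # capture video for the filmstrip view
    }
    if tcpdump:
        params["tcpdump"] = 1  # export a packet capture for Wireshark/CloudShark
    return "https://www.webpagetest.org/runtest.php?" + urlencode(params)

print(build_wpt_request("https://example.com", "MY_KEY"))
```

Submitting the returned URL with any HTTP client kicks off the test; the JSON response contains URLs for polling results.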
Docker – https://twitter.com/kartar
Content was good for those who hadn’t used Docker. I’ve done some basic work with it and find it interesting, but also quite basic in nature. Some of the discussion hit on issues around security, support for other container formats, and the overall limitations of this immature but evolving technology.
- The room was packed.
- Dockerfile instructions (kind of like an init.d script): I hadn’t used these before, but they are critical when using Docker at scale.
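For readers who, like me, hadn’t used Dockerfiles before: they describe how an image is built and what runs when a container starts. A minimal illustrative example (the app and base image here are invented for the sketch):

```dockerfile
# Hypothetical app, era-appropriate base image
FROM ubuntu:14.04

# Install dependencies at build time, not at container start
RUN apt-get update && apt-get install -y python

# Bake the application into the image
ADD app.py /opt/app/app.py

# The command run when the container starts (the init.d-like part)
CMD ["python", "/opt/app/app.py"]
```

`docker build` turns this into a reproducible image, which is what makes the approach workable at scale.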
RUM Comparison and Use Cases – https://twitter.com/bbrewer https://twitter.com/bluesmoon
The team at SOASTA presented a non-vendor-biased view of RUM. I found the landscape they laid out basic and partially incomplete, but it was still a valiant effort by the team there. The key takeaway is that more users are trying to tie business metrics to RUM data; for example, e-commerce companies are tying revenue to users and performance and analyzing the relationship.
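The revenue-to-performance idea can be sketched very simply: bucket RUM beacons by page-load time and compare conversion rates per bucket. The function and data below are invented for illustration, not from the SOASTA talk.

```python
# Bucket RUM beacons by load time and compute conversion rate per bucket.
def conversion_by_bucket(beacons, edges=(1.0, 3.0, 5.0)):
    """beacons: iterable of (load_time_seconds, converted) -> {bucket: rate}."""
    buckets = {}
    for load, converted in beacons:
        # First edge the load time falls under, else the catch-all tail bucket
        label = next((f"<{e}s" for e in edges if load < e), f">={edges[-1]}s")
        hits, conv = buckets.get(label, (0, 0))
        buckets[label] = (hits + 1, conv + (1 if converted else 0))
    return {k: conv / hits for k, (hits, conv) in buckets.items()}

beacons = [(0.8, True), (0.9, True), (2.5, True), (2.7, False),
           (6.0, False), (7.2, False)]
print(conversion_by_bucket(beacons))
```

Even this toy version shows the pattern analysts care about: conversion falling off as load times grow.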
Google – Jeffrey Dean (http://research.google.com/pubs/jeff.html)
An interesting discussion by Google’s Jeffrey Dean. The part I found most interesting was his analysis of replicating data to extra nodes to reduce latency, and of course the multiple-write techniques many use to handle that replication closer to the source of the data.
Keynote systems – https://twitter.com/keynotesystems
Ben investigated what page load times look like. Some of the interesting data he presented showed that what counts as fast varies by country and other demographic data. He also used the video capture features of WebPagetest.
Speedcurve – https://twitter.com/MarkZeman - Blog and Video of the Keynote - http://speedcurve.com/blog/velocity-responsive-in-the-wild/
This was one company I hadn’t heard of (well, more like a one-man show): an interesting outfit which does a nice front end and comparative analysis using a WebPagetest backend. Some notes:
- Sits on top of WebPagetest
- Competitive benchmarking, runs once a day, multiple runs
- Complements RUM
- Shows filmstrips
- Formats the data much better
- Helps find savings, etc
- Can get to webpagetest views as well
- Showed some interesting research on visualizing data
Understanding Slowness – http://www.twitter.com/postwait : https://speakerdeck.com/postwait/understanding-slowness
Always a highlight of Velocity for me. Theo is a unique and extremely bright individual who always brings good analysis and practical content; he’s an ops guy through and through. There is no marketing or other fluff you often see in conference content. Some high-level notes:
- Document your architectures
- Have a plan
- Use redundant vendors, don’t put your eggs in one basket (easier said than done, but for some things a good idea)
- Measure latency (performance)
- Quantiles over histograms
- Observation – capture state, watch the system
- dtrace, truss, tcpdump, snoop, sar, iostat, etc.
- Synthesis – Run a test to enable diagnostics (replicate an issue)
- Manipulation – test hypothesis
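The “quantiles over histograms” point is worth a quick worked example: averages hide tail latency, while quantiles expose it. A no-dependency sketch (sample data invented):

```python
# Why quantiles beat averages for latency analysis.
def quantile(samples, q):
    """Nearest-rank quantile of a list of samples (0 < q <= 1)."""
    ordered = sorted(samples)
    idx = max(0, int(round(q * len(ordered))) - 1)
    return ordered[idx]

# Mostly-fast latencies with a slow tail, in milliseconds
latencies = [12, 14, 15, 13, 16, 12, 11, 240, 13, 900]

print("mean:", sum(latencies) / len(latencies))  # badly skewed by the tail
print("p50 :", quantile(latencies, 0.50))        # the typical user
print("p95 :", quantile(latencies, 0.95))        # the tail you page on
```

Here the mean is over 100ms even though most users saw ~13ms; the p50/p95 split tells the real story.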
Some Simple Math to get Some Signal out of Your Ops Data Noise – https://twitter.com/tboubez - http://www.slideshare.net/tboubez/simple-math-for-anomaly-detection-toufic-boubez-metafor-software-velocity-santa-clara-20140625
I’m not sure I’d call this simple math at all, but here is a very new company we awarded a Cool Vendor this year for APM and ITOA, focused on ITOA use cases with their solution. They have a lot of growing up to do as a company, but they have some compelling analytics technologies. Mr. Boubez brings the audience through a journey of math: what we’ve tried (which doesn’t work too well) and some techniques which do work much better. Clearly worth a look.
- Gaussians don’t work with data center data
- Use histograms (even though Theo says they may not be the best visual analysis tool)
- The Kolmogorov-Smirnov test handles this kind of data better
- Handles periodicity in the data
- Box Plots / Tukey
- Doesn’t rely on mean and stddev
- IQR moving windows
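Two of the techniques above are easy to demonstrate in plain Python: a two-sample Kolmogorov-Smirnov statistic (max gap between empirical CDFs) and Tukey fences built from the IQR, which indeed use no mean or standard deviation. This is my own illustrative sketch of the ideas, not code from the talk:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    a, b = sorted(a), sorted(b)

    def ecdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)

    # The maximum ECDF gap occurs at one of the sample points
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in a + b)

def tukey_fences(samples, k=1.5):
    """Outlier bounds from the interquartile range; no mean/stddev involved."""
    ordered = sorted(samples)
    n = len(ordered)
    q1, q3 = ordered[n // 4], ordered[(3 * n) // 4]
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

baseline = [10, 11, 10, 12, 11, 10, 11, 12]
shifted  = [20, 21, 20, 22, 21, 20, 21, 22]
print("K-S distance:", ks_statistic(baseline, shifted))  # disjoint distributions

low, high = tukey_fences([10, 11, 10, 12, 11, 10, 11, 95])
print("outlier fences:", low, high)  # the 95 lands outside the upper fence
```

Running K-S over moving windows of metric data is one way to detect the distribution shifts Boubez described, without assuming a Gaussian.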
Sitespeed.io - https://twitter.com/soulislove
An early-phase tool for running rules against front-end optimizations, which is a cool idea. I’m going to wait to spend lab time on it until version 3, rewritten in Node.js, comes out in three weeks.
Category: APM, Monitoring, Trade Show
by Jonah Kowall | June 30, 2014 | 5 Comments
Just wanted to share some news and new research. We’ve seen a lot of changes in the APM market (lots of new entrants in the past month, and many more coming in the next few months). Expect some more reviews and content on the blog for some of the more interesting vendors. I’ve been posting less this month since I’ve not been home much. The first half of June was spent attending two Gartner infrastructure and operations management conferences, one taking place in Berlin and the second in Orlando. I just arrived back from the Velocity conference, which I will be posting about this week. I wanted to share some of the new research and content we presented.
The research note, led by Colin Fletcher and which I was able to co-author, published June 24th; the presentation was given earlier this month. Both are titled:
Apply IT Operations Analytics to Broader Datasets for Greater Business Insight – http://www.gartner.com/document/2778217 <- (subscribers only)
The research highlights the flexibility and use cases for leveraging ITOA investments to combine IT and non-IT data, providing insight and extracting relevant metrics across systems. Critical elements include the ability to explore, dream, and learn from the data collected, driven by combining the right data sets. The following Strategic Planning Assumption highlights this trend, and the need to deliver better analytics capabilities to the business: by 2017, approximately 15% of enterprises will actively use ITOA technologies to provide insight into both business execution and IT operations, up from fewer than 5% today.
We explain the different data sets which must be collected, and how insight can be derived. We also draw connections to other IoT and IT/OT research at Gartner.
Sample List of ITOA Vendors
AccelOps; Appnomic Systems; Apptio; Bay Dynamics; BMC; Evolven; Hagrid Solutions; HP; IBM; Loggly; Metafor Software; Moogsoft; Nastel Technologies; Netuitive; Nexthink; OpTier; Prelert; Savision; SAS; SL; Splunk; Sumerian; Sumo Logic; Teleran; Terma Software Labs; VMware; XpoLog
Expect a post of a cool vendor of the week tomorrow, and another one next week. We’ll be moving from a focus on ITOA log analytics vendors towards Mobile APM vendors.
Category: Analytics, IT Operations
by Jonah Kowall | June 4, 2014 | 3 Comments
Thanks to those who came out to the Gartner IOM conference in Berlin, which wrapped up yesterday. There were interesting things happening and good discussions with clients and attendees. We have the US version of this conference next week in Orlando, and I will be there!
Posting this from lovely Budapest.
Keeping on the theme of log analytics, which comes up a lot in conversations related to both unified monitoring (infrastructure availability) and APM: we are seeing this technology as particularly applicable across monitoring disciplines and silos within organizations.
There is yet another company we haven’t highlighted which has been flying under the radar since being founded in Israel in 2003. They have been building index and search technology which differentiates itself by doing deeper automated analysis of the data before the user is involved in querying it.
The software discovers patterns and problems within the log data and is more proactive than other log analytics tools used by IT operations. Data is searched by the user, and additional layers are placed on top of the data providing context. No rules are needed to enable these features. The product has its own indexing and storage system, but can also support Hadoop data stores.
Version 5.0 was recently launched which improves upon the user interface slightly and also adds in native support of logstash (you can read my other posts on the ELK stack).
Once additional data is found you can add that insight to the query:
In the screenshot below you can see some of the unique ways data is layered within the visualization timeline.
The product could use a more modern and usable user interface, along with easier implementation of collection agents and other technologies to help get data into the system. Improving these basic functions would help exploit a very impressive analytics engine.
The company has not had particularly good visibility: being a self-funded, technology-focused company, they have not invested in marketing or sales efforts to date. This doesn’t mean they haven’t done well; they have some impressively large installs of the technology. It has, however, resulted in less growth than competitors have had, but they are looking to change that.
Category: Analytics, IT Operations, Logfile, Monitoring, Pick of The Week
by Jonah Kowall | May 29, 2014 | 9 Comments
Gartner is beginning the research process for the 2014 Magic Quadrant for Application Performance Monitoring. Will Cappelli and I will be running the research, as we have for the last few years. The release is currently planned for the 4th quarter of 2014, and we will be sending out surveys to vendors next week. Based on the criteria below, if you believe you qualify for the research, please get in touch with me via email or Twitter (@jkowall):
The 2013 Magic Quadrant saw minor changes around the weighting and importance of analytics and mobile. The 2014 research once again sees importance in these two critical trends, but also a focus on SaaS delivery as essential, as enterprises continue to adopt public-cloud-delivered applications, services, and infrastructure.
Market Definition Is Unchanged
In the 2014 Magic Quadrant, the market definition will remain unchanged from the 2013 version (see “Magic Quadrant for Application Performance Monitoring”), although we have made changes to the application definition. Gartner defines an application as a software program or group of programs that interact with their environment via defined interfaces and which are designed to perform a specific range of functions. They may be end-user facing, presenting a user interface, or provide the interface between two applications themselves. Applications are not (normally) wholly independent, as they typically require an operating system or multiple operating system instances to manage their use of physical and logical computing, storage and network resources within a data center, or provided by third parties.
Gartner defines APM as having five dimensions of functionality:
- End-user experience monitoring (EUM) — The capture of data about how end-to-end latency, execution correctness and quality appear to the real user of the application. Secondary focus on application availability may be accomplished by synthetic transactions simulating the end user.
- Application topology discovery and visualization — The discovery of the software and hardware infrastructure components involved in application execution, and the array of possible paths across which these components communicate to deliver the application.
- User-defined transaction profiling — The tracing of user-grouped events, which comprise a transaction as they occur within the application as they interact with components discovered in the second dimension; this is generated in response to a user’s request to the application.
- Application component deep dive — The fine-grained monitoring of resources consumed and events occurring within the components discovered in the application topology discovery and visualization dimension. This includes server-side components of software being executed.
- IT operations analytics — The combination or usage of techniques, including complex operations event processing, statistical pattern discovery and recognition, unstructured text indexing, search and inference, topological analysis, and multidimensional database search and analysis to discover meaningful and actionable patterns in the typically large datasets generated by the first four dimensions of APM. Additionally these data sets are increasingly being analyzed for not only operational information, but business and software analytics.
Vendors will be required to meet the following criteria to be considered for the 2014 APM Magic Quadrant. In comparison to 2013, we have adjusted numerical thresholds.
■ The vendor’s APM product must include all five dimensions of APM (EUM, application topology discovery and visualization, user-defined transaction profiling, application component deep dive, and IT operations analytics). The deep-dive monitoring capabilities must include Java and .NET, but may also include one or more key application component types (e.g., database, application server). The solution must include user-defined transaction profiling, with IT operations analytics technologies applied to the text and metrics collected by the other four dimensions.
■ The APM product must provide compiled Java and .NET code instrumentation in a production environment.
■ Customer references must be located in at least three of the following geographic locations: North America, South America, EMEA, the Asia/Pacific region and/or Japan.
■ The vendor should have at least 50 customers that use its APM product actively in a production environment.
■ The vendor references must confirm they are monitoring at least 200 production application server instances in a production environment.
■ Some features of the APM offering must be available via a SaaS delivery model. This offering must be delivered directly from the vendor.
■ The product must be shipping to end-user clients for production deployment and designated with general availability by July 15th 2014.
■ Total revenue (including new licenses, updates, maintenance, subscriptions, SaaS, hosting and technical support) must have exceeded $5 million in calendar year 2013.
In addition to these criteria, we will be evaluating the vendor’s ability to cross multiple buying centers, as well as its ability to target specific verticals as validated by reference customers. Detailed criteria and subcriteria will be published along with the final research later this year.
While a vendor may meet the inclusion criteria for the APM Magic Quadrant, placement within the finalized Magic Quadrant will depend on its scoring in a number of categories. Ratings in these categories will be used to determine final placement within the 2014 APM Magic Quadrant. The 2014 evaluation criteria are based on Completeness of Vision and Ability to Execute.
Completeness of Vision
Market Understanding: This criterion evaluates vendor capabilities against future market requirements. The market requirements map to the market overview discussion and look for the following functionality:
■ EUM, including real and synthetic availability testing
■ Runtime application architecture discovery
■ User-defined transaction profiling
■ Application component deep dive
■ IT operations analytics for problem isolation and resolution
■ IT operations analytics to answer questions about software or business execution
■ Ability to address the mobile APM market
Marketing Strategy: We evaluate the vendor’s capability to deliver a clear and differentiated message that maps to current and future market demands, and, most importantly, the vendor’s commitment to the APM market through its website, advertising programs, social media, collaborative message boards, tradeshows, training and positioning statements.
Sales Strategy: We evaluate the vendor’s approach to selling APM to multiple buying centers. We also evaluate the vendor’s ability to sell in the appropriate distribution channels, including channel sales, inside sales and outside sales.
Offering (Product) Strategy: We evaluate product scalability, usability, functionality, and delivery model innovation. We also evaluate the innovation related to delivery of product and services.
Business Model: This is our evaluation of whether the vendor continuously manages a well balanced business case that demonstrates appropriate funding and alignment of staffing resources to succeed in this market. Delivery methods will also be evaluated as business model decisions, including the strength and coherence of on-premises and SaaS solutions.
Vertical/Industry Strategy: We evaluate the targeted approaches in marketing and selling into specific vertical industries. Commonly, APM solutions are bought and targeted toward the financial services, healthcare, retail, manufacturing, media, education, government and technology verticals.
Innovation: This criterion includes product leadership and the ability to deliver APM features and functions that distinguish the vendor from its competitors. These include unique approaches to application instrumentation, mobile visibility, and catering towards the increased demands of continuous release. Specific considerations include resources available for R&D, and the innovation process.
Geographic Strategy: This is our evaluation of the vendor’s ability to meet the sales and support requirements of IT organizations worldwide. In this way, we assess the vendor’s strategy to penetrate emerging markets.
Ability to Execute
Product/Service: Gartner evaluates the capabilities, quality, usability, integration and feature set of the solution, including the following functions:
■ Day-to-day maintenance of the product
■ Ease and management of deploying new APM
■ Ease of use and richness of functions within the product
■ Product deployment options and usability
■ Integration of overall APM-related portfolio or unified APM offering
Overall Viability (Business Unit, Financial, Strategy and Organization): We consider the vendor’s company size, market share and financial performance (such as revenue growth and profitability). We also assess the leadership within the company in terms of people, what employees think of the leadership, and its ability to drive the company forward. We also investigate any investments and ownership, and any other data related to the health of the corporate entity. Our analysis reflects the vendor’s capability to ensure the continued vitality of its APM offering.
Sales Execution/Pricing: We evaluate the vendor’s capability to provide global sales support that aligns with its marketing messages; its market presence in terms of installed base, new customers, and partnerships; and flexibility and pricing within licensing model options, including packaging that is specific to solution portability.
Market Responsiveness and Track Record: We evaluate the execution in delivering and upgrading products consistently, in a timely fashion, and meeting road map timelines. We also evaluate the vendor’s agility in terms of meeting new market demands, and how well the vendor receives customer feedback and quickly builds it into the product.
Marketing Execution: This is a measure of brand and mind share through client, reference and channel partner feedback. We evaluate the degree to which customers and partners have positive identification with the product, and whether the vendor has credibility in this market.
Customer Experience: We evaluate the vendor’s reputation in the market, based on customers’ feedback regarding their experiences working with the vendor, whether they were glad they chose the vendor’s product and whether they planned to continue working with the vendor. Additionally, we look at the various ways in which the vendor can be engaged, including social media, message boards and other support avenues.
Category: APM, IT Operations, Monitoring
by Jonah Kowall | May 23, 2014 | 31 Comments
Prior to Gartner, as an end user, I watched the rise of companies like Precise Software (I was a customer) and OpTier. I’m not going to rehash all of the interesting twists of Precise; you can read about them here: http://en.wikipedia.org/wiki/Precise_Software. The company was once worth well over $500m, with some superb technology. With a successful $140m IPO in 2000, they were a high flyer. Through the changes of being bought twice after the IPO, the business execution and leadership began to suffer. This was something they did not recover from, eventually resulting in a fire sale of the technology, which was picked up for pennies on the dollar by Idera last year. We have high hopes Idera can rebuild the Precise technology and meld it with CopperEgg to create a compelling APM solution (both SaaS and on-premises, of course).
Similarly, OpTier, another great technology innovator in APM which pioneered the use of advanced analytics in the space, didn’t suffer from a lack of vision or technology. The issues were once again with the leadership outside of the technical parts of the organization. Having raised well over $100m in the last 9 years, the company never kept pace with the changes in the market. OpTier finally closed its doors as of yesterday; it’s really sad to see that the transformation from heavy on-premises enterprise software to SaaS was not happening fast enough to fix the cash situation. Hopefully someone will acquire some of these great assets and possibly see the transformation through.
I’m starting to see yet another story follow in this similar path, but nothing I can write about yet… We shall see.
It’s certainly been an interesting 3 years at Gartner covering the APM space. Upon joining the company, there was some innovation happening and the rise of two of the most well-regarded and asked-about companies in the APM space. Both AppDynamics and New Relic have been major disruptors in terms of making APM easy, inexpensive, and effective. Beyond the depth and technical expertise of the companies (which they both clearly have), it was about the delivery model and execution from a sales, marketing (which matters quite a bit), and senior management vision perspective. These two companies are both young, but the experience and leadership speak for themselves. Both have valuations well over $2b and a path towards IPO, should they need the funding; they are the new generation of APM companies pushing the envelope.
I’m not discounting the technology and size of Compuware. They are going through a pretty drastic transformation themselves, and truly focusing on APM as a core business. Fixing some of the prior mistakes on the business side has been a journey for them, but there is no question about the vision and technology. Expect some disruptive capabilities from Compuware; they are not in follow mode, while many if not all other APM companies are.
There are few other companies in the space truly leading and innovating, but many are trying to change and catch up. The question is, with the rate of innovation and change, can anyone actually accomplish that, especially given the level of growth and capital commanded by the leaders in the market?
Please comment here or reach me at @jkowall on Twitter.
Category: APM, IT Operations, Monitoring
by Jonah Kowall | May 21, 2014 | 2 Comments
I wanted to give a heads up that we’ve updated a note, published about 18 months ago, on open-source and freeware monitoring tools.
Clients only link:
The document covers basic information on the following technologies:
Mini-suites: Ipswitch, ManageEngine, SolarWinds
Free monitoring from Artica ST (Pandora FMS), GroundWork, Icinga, op5, Opsview, Paessler, SevOne, Spiceworks, Splunk, Torch (Graylog2), VMTurbo, XpoLog, Zabbix, Zenoss
SaaS monitoring from AppDynamics, AppFirst, Boundary, Datadog, GFI Software, Loggly, New Relic, ScaleXtreme (now part of Citrix), Splunk, and Sumo Logic
Changes from late 2012 until today:
Since writing our previous research on how to leverage free and low-cost server, network and storage monitoring tools, we’ve seen the following changes in the market:
- AppDynamics — Lite offering provides always-on monitoring, where the previous product was only functional during use
- Artica ST — Pandora FMS, open-source offering
- Icinga — Open-source offering
- Loggly — Relaunched offering targeted at a wider user community
- Torch — Graylog2, a new offering
- XpoLog — Free offering for log analysis
Compared with our earlier research about free and low-cost server, network and storage monitoring, we removed the following vendors, which (unless otherwise noted) left this market:
- Correlsense — Low-cost offering
- Jinspired — Lack of adoption across Gartner client base
- Net Optics — Lack of adoption across Gartner client base
- Quest Software (now part of Dell Software) — Low-cost offering
- VMware — Low-cost offering lacked Gartner client adoption, but the solution is being retooled and better integrated with the VMware vCenter Operations Manager offering.
Sorry I forgot to include a few vendors and solutions in the note which will be in the next revision:
- CA Nimsoft Monitor Snap is a freemium offering covering server monitoring for up to 30 servers at no cost. They launched the offering about 7 months ago and have had over 2,500 installations of the product!
Category: APM, IT Operations, Logfile, Monitoring, SaaS
by Jonah Kowall | May 17, 2014 | 2 Comments
Sorry, no cool vendor this week; instead I’m going to cover some other “cool” stuff coming from a company we don’t normally think of as particularly “cool”. Microsoft is innovating in the systems management space, and creating unique technologies with a broad reach. Microsoft has a significant install base of Operations Manager, and it comes up very regularly in client inquiry; hence I’ve written several research notes focused on this technology.
I was pleased to make it out to Microsoft TechEd this year, after not making it since 2011 (too many vendor shows; I need to rotate). Although this was an off year in terms of major platform launches, especially around System Center Operations Manager (previously called SCOM), Microsoft still announced a slew of new cloud-based products. Around Visual Studio, highlights included mobile development based on Apache Cordova to allow for building universal Windows apps (Store and Phone) across Microsoft platforms, as well as iOS and Android.
Within Visual Studio Online, the updates around Application Insights were particularly interesting in terms of APM, one of my focus areas. Additionally, the announcement of improved functionality and capabilities within System Center Advisor is of interest to my coverage of ITOA and APM. Let’s dig a bit deeper…
I can now talk more publicly about the changes Microsoft has been making internally over the last 14 months, re-organizing the teams that came from the Avicode acquisition (Microsoft’s APM technology) to sit underneath Visual Studio (the Developer Division) rather than being aligned with the System Center teams. This organizational and strategy change enabled Microsoft to create a developer-focused APM product consisting of instrumentation for Java and .NET, along with embedded tools within Visual Studio for creating custom instrumentation.
Through this change Microsoft has launched a SaaS only APM product, which over time will be fed into Operations Manager. They have built specific tooling which integrates into Visual Studio making it easy for developers to write their own instrumentation. This instrumentation can include any metric data or custom messages which are sent to the online service (currently completely free of charge as a preview - http://msdn.microsoft.com/en-us/library/dn481095.aspx) and can be viewed, reported upon, and analyzed. Over time this will provide a broader understanding of software analytics across Microsoft and non-Microsoft technologies and platforms. While there are limitations in the preview, there is a lot which can be done today.
The online portal also leverages the same Global System Monitor (GSM) synthetic availability monitoring, which can monitor a single URL or a set of steps a user may traverse as they use a web application. This is already available for free to users of Operations Manager (with some limitations), and is similar to the synthetic testing capability you often see offered by Compuware Gomez, Keynote, and over three dozen other companies. These technologies are now part of developer-centric tools in Visual Studio Online as well.
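At its core, a synthetic availability check of this kind is just a timed fetch compared against a threshold. The sketch below is a generic illustration (not Microsoft’s GSM API; the function name and threshold are invented), with the fetcher injectable so it can be pointed at any HTTP client:

```python
import time
import urllib.request

def synthetic_check(url, threshold_s=2.0, fetch=urllib.request.urlopen):
    """Fetch a URL and report (available, latency_seconds, within_threshold)."""
    start = time.monotonic()
    try:
        with fetch(url, timeout=threshold_s * 2) as resp:
            ok = 200 <= resp.status < 400  # treat 2xx/3xx as available
    except Exception:
        # Any network or HTTP failure counts as unavailable
        return (False, time.monotonic() - start, False)
    latency = time.monotonic() - start
    return (ok, latency, latency <= threshold_s)
```

Multi-step transaction monitoring is just a sequence of such checks, carrying cookies or session state between steps; commercial tools add scheduling, geographic distribution, and alerting on top.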
The APM components can also monitor servers or Azure components in terms of usage and performance information by leveraging built in instrumentation or agents.
Short video : http://channel9.msdn.com/Series/Application-Insights-for-Visual-Studio-Online/Application-Performance-Monitoring-APM–Diagnostics-with-Application-Insights
System Center Advisor
This System Center component was previously uninteresting from my coverage perspective, focused as it was on identifying configuration problems on systems. The product has always been cloud only, and is part of the System Center suite.
Microsoft has changed things on the preview of this service, including several security and operations use cases. This begins with sending more collected data from System Center components to the online service, such as leveraging OpsMgr (SCOM) as a data source.
The preview includes capacity planning use cases across systems, both physical and virtual, but more interestingly adds log analytics technology. Event log data is fed from OpsMgr today, but I expect this to expand over time. The log analytics are based on Elasticsearch on Azure under the covers, but the presentation is all web based. The query language and analytics are pretty basic, but useful. The speed of analysis is very good based on the demos I have seen, though I have yet to get it up and running in my lab. Regardless, this is the start of some interesting technology from Microsoft. During the preview it is completely free of charge, and you don’t have any costs for storage since the data is all stored on Microsoft Azure.
Video from TechEd (Fast forward to about 1:05:27 : http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B381#fbid=)
Hopefully this was helpful. Feel free to leave a comment or hit me up on Twitter: @jkowall.
Category: Analytics, APM, Big Data, IT Operations, Logfile, Mobile, Monitoring, SaaS, Trade Show
by Jonah Kowall | May 9, 2014 | 9 Comments
Getting this one done early for next week, as I will be at Microsoft’s TechEd conference Monday through Wednesday in Houston. If anyone wants to meet up, just hit me up on Twitter; I’ll be in meetings and sessions.
There is no question that the vendors which come up most regularly for basic availability monitoring are those offering low-cost, easy-to-use, and effective products that monitor component health for availability. This is the main reason folks like Microsoft, SolarWinds, and ManageEngine come up so often in monitoring inquiry. Building a product which focuses on ease of use is difficult; this includes the entire experience, from download, POC, implementation, and purchasing through day-to-day use and maintenance. As engineers we tend to over-engineer, and software vendors are guilty as well: bloated products are designed by listening to each customer request and implementing solutions without stepping back to reconsider the design and usability. The vendor highlighted this week has done a good job rebuilding their product with this in mind.
AdRem Software is based in Poland, with an office in New York, NY. They focus on building a unified monitoring offering, NetCrunch, which handles multiple use cases in the monitoring space. With the recent release of version 8, there has been renewed focus on building market relevance and growing the client base. Although founded in 1998 and selling monitoring products since, we have seen limited adoption across our client base, probably due to a lack of sales and marketing investment. The customer base tends to be concentrated in Japan and Europe; with renewed focus and investment in marketing, penetration may improve.
The product features include network monitoring (with topology), flow analysis, and server monitoring (including virtualization technologies). One unique feature is agentless monitoring: the product uses SSH to get deeper server monitoring of Linux variants (and *BSD and macOS) without software agents, which typically cause support pain. The product supports dozens of standard packaged applications found on servers, as most unified monitoring tools do. On the network side, the product builds topologies of interconnected devices and presents rich maps. These maps also present flow data such as bandwidth and data usage of the endpoints, including the servers.
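The agentless approach above boils down to running standard commands over SSH and parsing what comes back. A rough sketch of that pattern follows; the host and the use of the system `ssh` client are hypothetical stand-ins (NetCrunch's actual implementation is its own), and the parser works on ordinary `uptime` output.

```python
# Sketch of agentless server monitoring: run a command over SSH and parse
# the result, with no agent installed on the target. The SSH invocation is
# illustrative; key-based auth to the host is assumed.
import re
import subprocess

def fetch_uptime(host):
    # Uses the system ssh client to run uptime remotely.
    return subprocess.check_output(["ssh", host, "uptime"], text=True)

def parse_load(uptime_line):
    # Extract the three load averages from output like:
    #  "10:02:11 up 12 days, 3:04, 2 users, load average: 0.15, 0.10, 0.05"
    # Handles both Linux ("load average:") and BSD ("load averages:") forms.
    match = re.search(r"load averages?: ([\d.]+),? ([\d.]+),? ([\d.]+)",
                      uptime_line)
    if not match:
        raise ValueError("unrecognized uptime output")
    return tuple(float(v) for v in match.groups())

sample = " 10:02:11 up 12 days,  3:04,  2 users,  load average: 0.15, 0.10, 0.05"
print(parse_load(sample))  # (0.15, 0.1, 0.05)
```

The same pattern extends to `df`, `vmstat`, and so on, which is why SSH collection can cover Linux, *BSD, and macOS without per-OS agents.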
I implemented the product in my lab; the download and install process was very easy, and the wizard covering configuration and auto discovery was very well done. The backend includes standard SQL, a proprietary NoSQL store (for metrics), and an XML schema where state data is kept. This is an easy to implement solution with care paid to the design elements. The unified views bring multiple data sets together in a single place:
Some of the issues with the product include that the tool is not web based (EDIT: they have a web UI, but it’s more of a second-class citizen; it does look nice and shares the same look and feel). There is still a Windows application, making the data less available to people within the organization who do not have the client. The product is also focused more on network use cases than server use cases, but it handled server monitoring quite nicely in my testing (see the screenshot from my lab above). The company has been around for quite a while, but has remained small in terms of staff and investment. The product is priced quite attractively, in a similar manner to what you see from other low cost tools, such as those mentioned above.
Thanks for reading; please leave comments here or on Twitter @jkowall
Category: IT Operations Monitoring NPM Pick of The Week Tags:
by Jonah Kowall | May 2, 2014 | 3 Comments
There is lots of interest across the board in how to integrate and deliver in support of a DevOps philosophy. In this third annual Cool Vendors in DevOps research there are five vendors that help DevOps managers, app engineers, and release and cloud managers control the application life cycle.
This year I contributed Caliper.io, an interesting monitoring company focused on monitoring the user experience of single-page applications. Single-page applications are becoming increasingly popular amongst newer web application architectures, and they change the page interaction paradigm that most monitoring and measurement is based on.
I also contributed a write-up for Data Dog; you might have seen other research where they were covered. This innovative SaaS offering provides glimpses at new ideas for event management, collaboration, and open monitoring systems. Of course they also have their own set of challenges.
Ronni Colville and Colin Fletcher contributed MidVision for their deployment software.
Colin Fletcher and Jim Duggan (see, we have good DevOps collaboration) contributed Plutora, which provides a SaaS-based ARA product with some interesting concepts around release management.
Finally, Colin Fletcher included ZeroTurnaround, who provide tooling around continuous testing to make developer time more efficient. They also offer automated release software supporting a continuous delivery cycle.
Clients will have access to the full research, which goes into much more detail: why the approach of each technology provider is cool, the challenges they face, and who should be investigating these innovative and emerging technology companies.
16 April 2014 G00262716
Category: DevOps IT Operations Mobile Monitoring Pick of The Week SaaS Tags:
by Jonah Kowall | April 30, 2014 | 3 Comments
We decided to rename our Cool Vendors research this year, since we were regularly featuring technologies not related directly to performance demands, but also general analytics technologies applied to the needs of infrastructure and operations professionals. In this year's research we saw a similar split between these technologies.
In the research we profiled several vendors:
Will Cappelli included Metafor Software, who provide ITOA technologies to detect and better understand change and configuration in server environments. The product is available in SaaS or on-premises deployment models.
I included NetMotion Wireless, who are building some cool technology to better track and manage enterprise wireless quality and delivery of services. The product uses a small agent to measure performance and usage. As wireless connectivity becomes more critical, understanding carrier and hardware choices will increase in importance.
Colin Fletcher included Nexthink, who build ITOA technology relying on end user information collected from desktops to help with problem resolution, configuration issues, and some compliance use cases. Many issues today reside on end user devices, and there are few technology providers who deliver the client-side visibility needed. (Other popular choices aside from Nexthink include Aternity and Lakeside Software.)
Colin Fletcher also included Sumo Logic, a SaaS-based offering analyzing machine data (logs), similar to popular tools such as Splunk, but with a unique real-time architecture. The product has interesting elements of anomaly detection, as well as the ability to coach the system’s auto categorization.
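For readers new to the idea, here is a toy illustration of what log anomaly detection means in principle. This is emphatically not Sumo Logic's algorithm (theirs is far more sophisticated); it simply flags time buckets whose event count deviates sharply from the mean.

```python
# Toy log anomaly detection: flag minutes whose event count deviates
# sharply from the mean. A deliberately simple z-score approach, not any
# vendor's actual algorithm.
from statistics import mean, stdev

def anomalies(counts, threshold=2.5):
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

# Per-minute error counts; minute 5 is a burst.
counts = [12, 11, 13, 12, 11, 240, 12, 13, 11, 12]
print(anomalies(counts))  # [5]
```

Real products layer categorization, seasonality handling, and feedback ("coaching") on top of this kind of baseline-versus-deviation idea.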
Finally, I also included ThousandEyes, who’ve taken the commoditized market of synthetic monitoring and made it interesting again by layering on additional data sources about the internet path (BGP). This provides added visibility and information for these synthetic transactions, making them much more useful to those running or relying on SaaS (which is pretty much everyone these days).
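The layering idea can be sketched as a simple data join: take a synthetic transaction result and attach the announced AS path for the target prefix. All field names and the AS-path records below are invented for illustration; they only convey the shape of the enrichment, not ThousandEyes' data model.

```python
# Sketch of enriching a synthetic check result with network-path (BGP)
# data. Record fields and AS numbers are hypothetical.
def enrich(result, bgp_paths):
    # Attach the AS path announced for the target prefix, if known.
    path = bgp_paths.get(result["target_prefix"])
    return {**result, "as_path": path, "path_known": path is not None}

check = {"url": "https://example.com", "target_prefix": "93.184.216.0/24",
         "response_ms": 182, "status": 200}
paths = {"93.184.216.0/24": [64500, 64501, 15133]}

print(enrich(check, paths))
```

The value is in the combination: a slow or failed transaction plus a path change in the same record tells you whether the problem is the application or the internet in between.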
There is much more detail and analysis in the research, including why they are cool, challenges they face, and who should care about these technologies and technology providers. Clients can access the research at the link below:
Category: Analytics APM IT Operations Logfile Mobile Monitoring Pick of The Week SaaS Tags: