Jonah Kowall

A member of the Gartner Blog Network

Jonah Kowall
Research Vice President
3.5 years with Gartner
20 years IT industry

Jonah Kowall is a research Vice President in Gartner's IT Operations Research group. He focuses on application performance monitoring (APM), Unified Monitoring, Network Performance Monitoring and Diagnostics (NPMD), Infrastructure Performance Monitoring (IPM), IT Operations Analytics (ITOA), and general application and infrastructure availability and performance monitoring technologies.

Cool Vendor Pick: Graylog

by Jonah Kowall  |  January 27, 2015  |  1 Comment

There has been a lot of interest over the last 12 months in products based on open source for monitoring and management. In log analysis, Elasticsearch has been a key player, strengthened by the growing investment in the space, and awareness has increased greatly in the past year. The popular Kibana frontend has been the main GUI for Elasticsearch; paired with Logstash for ingest, these projects make up the ELK stack. There is another great open source project worth a look, and this week’s write-up focuses on that alternative to ELK.

The company behind Graylog is Torch, out of Hamburg, Germany (https://www.torch.sh/), which does consulting around the product. The open source site is https://www.graylog2.org/. The project is an Elasticsearch-based product, but unlike Kibana it also has additional features:

  • Take inputs directly into the Graylog server processes
  • Output from the server to multiple backends via output plugins; right now the main one is for Elasticsearch
  • Alerting based on matching or other criteria, integrated into the Graylog project along with a stream processing capability

Supported data sources come in the form of plugins, which include syslog, GELF (Graylog Extended Log Format), and others. GELF allows for several enhancements over typical syslog:

  • No length limitations for messages (syslog is 1024 bytes)
  • Data types (string, number)
  • Independence from the variations across syslog implementations
  • Compression via gzip or zlib

The nice thing is that you don’t need to do any extractions once messages have been ingested via GELF. There are 72 such plugins, including many GELF libraries (see: https://www.graylog2.org/supported-sources?perPage=100).
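
To make the format concrete, here is a minimal sketch of sending a single GELF message to a Graylog UDP input. The hostname graylog.example.com and the message fields are placeholders for illustration; port 12201 is the conventional GELF UDP port, and the exact input configuration will depend on your Graylog setup.

```python
# Minimal sketch: emit one gzip-compressed GELF message over UDP.
# Hostname, field values, and severity below are illustrative placeholders.
import gzip
import json
import socket
import time

gelf_message = {
    "version": "1.1",                    # GELF spec version
    "host": "web01.example.com",         # source of the message
    "short_message": "User login failed",
    "timestamp": time.time(),            # seconds since epoch, fractions allowed
    "level": 4,                          # syslog-style severity (warning)
    "_user_id": 42,                      # additional fields are prefixed with "_"
    "_component": "auth",
}

payload = gzip.compress(json.dumps(gelf_message).encode("utf-8"))  # gzip or zlib
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("graylog.example.com", 12201))               # default GELF UDP port
sock.close()
```

Because the message arrives already structured, with typed fields and no 1,024-byte limit, nothing needs to be parsed or extracted on the Graylog side.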

On the site you can sign up for a self-service trial of the software. I did this in early November, and there has been another release since then, so these screenshots may be a little out of date:

[Screenshots]

Multiple backend nodes can be connected to the frontend, and the GUI provides good management of those connections. The main dashboard shown when you log in displays information about the cluster, its components, and their status, along with a query box.

[Screenshot]

Here are some other administrative views. Many log management tools, especially open source ones, neglect day-to-day maintenance and administration. Being a systems and operations person myself, I always dig into the internals needed for day-to-day administration. Graylog has a lot of what has been missing across open source Elasticsearch management tools. Some additional views:

[Screenshots]

They have a data generator in the demo so you’ll see there are plenty of events in the data store.

[Screenshots]

Here is a query for smtp in the last 30 minutes.

[Screenshot]

You can also see inside the queries being sent to Elasticsearch; here are the JSON objects being passed to the engine:

[Screenshot]
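
As a rough illustration of what those query bodies can look like, the sketch below runs a search for "smtp" restricted to the last 30 minutes against an Elasticsearch index, using the Elasticsearch 1.x query DSL. The index name, field name, and exact structure are assumptions for the example, not Graylog's verbatim output.

```python
# Illustrative only: roughly the shape of an "smtp in the last 30 minutes" search
# against Elasticsearch 1.x. Index and field names are assumptions for the example.
import requests

query_body = {
    "query": {
        "filtered": {
            "query": {"query_string": {"query": "smtp"}},
            "filter": {"range": {"timestamp": {"gte": "now-30m"}}},
        }
    },
    "size": 100,
}

resp = requests.post("http://localhost:9200/graylog2_0/_search", json=query_body)
print(resp.json()["hits"]["total"], "matching messages")
```

Graylog builds and issues these queries itself; being able to see the raw request is mainly useful when debugging search behavior or index performance.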

You can also get quick value breakdowns of the results:

[Screenshot]

Graylog has the notion of streams, as illustrated below.

[Screenshot]

Streams are a way to apply real-time rules to the data coming into the Graylog server before it is committed to Elasticsearch; this real-time processing is a differentiator versus Kibana-based systems.
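
To illustrate the idea with a conceptual sketch only (not Graylog's actual rule engine or API), stream rules can be thought of as predicates evaluated against each incoming message before it is indexed, with matching messages routed to the corresponding streams:

```python
# Conceptual sketch of stream routing: each incoming message is checked against
# stream rules in real time, before the Elasticsearch write. Field names, rule
# types, and the structures below are illustrative, not Graylog's implementation.
def rule_matches(rule, message):
    value = str(message.get(rule["field"], ""))
    if rule["type"] == "exact":
        return value == rule["value"]
    if rule["type"] == "contains":
        return rule["value"] in value
    return False

streams = [
    {"name": "mail", "rules": [{"field": "facility", "type": "exact", "value": "mail"}]},
    {"name": "errors", "rules": [{"field": "level", "type": "exact", "value": "3"}]},
]

def route(message):
    """Return the streams whose rules all match; alerts/outputs could fire here."""
    return [s["name"] for s in streams
            if all(rule_matches(r, message) for r in s["rules"])]

# Example: an incoming syslog-style message lands in the "mail" stream.
print(route({"facility": "mail", "level": "6", "short_message": "queue flushed"}))
```

Because matching happens before the data is written to Elasticsearch, alerts and stream outputs can react immediately rather than waiting on a scheduled search of the index.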

[Screenshots]

Here are some sample sinks showing what you can do with a proper eventing system, such as alerting:

[Screenshot]

Next is the requisite dashboarding for any monitoring tool. Everyone loves dashboards, users are always asking for more of them, and they clearly do sell monitoring products, but the value they provide is typically pretty limited. If the actual analytics in our software were better, the computer would be doing the analysis instead of a user looking at graphical displays of data. I digress…

[Screenshot]

You cannot share the same backend between Kibana/Logstash and Graylog, since they use different schemas for the log data in Elasticsearch. Hence you’ll have to decide which tool you want to use when setting up Elasticsearch. Please leave comments or questions below, or reach me at @jkowall on Twitter.

Category: Analytics, ITOA, Logfile, Monitoring, OLM

The Truth About Microsoft’s New Licensing Change – Guest Post

by Jonah Kowall  |  January 23, 2015  |  1 Comment

The press has been reporting today that Microsoft has introduced new licensing methods, which make Windows and Office look highly affordable. Here is an example article: http://www.computerworld.com/article/2867542/microsoft-touts-7-per-user-monthly-pricing-for-windows-subscriptions.html ("Microsoft touts $7-per-user monthly pricing for Windows subscription packages"). Here at Gartner we have several experts who focus on Microsoft licensing, and Dolores is one of them. I’ve asked her to do a guest post.

This is a guest post by Dolores Ianni; follow her on Twitter at @dmianni.

“the suite costs between $7 and $12 per user per month”

Let’s clear the smoke here. The price range mentioned above is for the ECS add-on product, so you would also need to pay for Software Assurance (SA) across the on-premises enterprise desktop components; it does not represent the full cost of the desktop license by any stretch. When you add them together, you will see that this is well beyond the price of a cup of coffee.

The full ECS USL is more than four times the cost of the ECS add-on. Compared to the traditional Office 365 enterprise desktop with client OS licensed by device, the full ECS USL carries a premium, so you have to consider all the features in the ECS bundle. With deeper discounting for early adopters, many customers will find ECS cheaper and will jump on the mega-bundle bandwagon. It is strategic to Microsoft, as it attaches services like Intune and Azure AD Premium to accounts that otherwise would have no need for them at this time. The Windows Enterprise SA per-user options do have a compelling price point for organizations that have BYOD and virtual desktops in an Enterprise Agreement.

There are some compelling aspects of ECS, depending on user requirements across the enterprise, and many pros and cons to licensing the add-on subscriptions. For a more in-depth analysis, look for new research publishing this quarter on ECS, which outlines all of the comparisons and issues to consider when choosing between traditional Office 365, the ECS bundle, and the add-on options.

Thanks for reading; please leave comments here and/or on Twitter. Thank you to Dolores for her post.

Category: Uncategorized

Cool Vendor Pick: SolarWinds Network Performance Monitor (DPI)

by Jonah Kowall  |  January 15, 2015  |  3 Comments

Happy new year to everyone. I just finished taking a week off after New Year’s, and I’m already hitting the road. On the research front, we’ve published a new update to the RFP template for APM tools; expect more refreshed and new content this month. Now on to the pick of the week (which is happening monthly).

SolarWinds released a pretty major update to its very popular Network Performance Monitor. This is one of the most popular tools out there for basic network monitoring needs, and it handles them well. The product can also be extended to many other areas. Most implementations consist of SNMP polling of devices and syslog collection for faults, and many extend it toward NetFlow analysis. It’s also frequently coupled with SolarWinds Network Configuration Manager (NCM) for NCCM use cases. In the latest large update, version 11, they introduced some major new capabilities, including the ability to do basic deep packet inspection (DPI). This means SolarWinds can now observe packet-level detail instead of relying on vendors and devices to send it summarized data. The unique part of the solution is its flexible deployment models: the product can be placed on a dedicated device, with spanned or tapped network traffic sent to it (see: http://cdn.swcdn.net/creative/v11.4/images/landing_pages/Use-Case/img/screenshots/network_sniffer.png), or you can deploy the sniffer as an agent on your servers. This allows for understanding detailed application network traffic along with the ever-important measure of latency, the key metric in determining network performance. SolarWinds is doing this at very low cost, disrupting a market that otherwise has not been inexpensive to enter. Let’s have a look at the new product and what it can do.

SolarWinds has always made it very easy to download and try the product for evaluation. After the download, the setup wizard is run, which needs a Microsoft Windows host and a SQL Server instance for the product install. It also uses IIS, since the product is web-based. These screenshots and my lab testing were done back in October, but I’m only getting around to posting now.

[Screenshot]

There are lots of features that can be selected to enhance the capabilities of the product; here are some:

[Screenshot]

Once you log in to the web interface, you are presented with the dashboard. Basic monitors are set up for the SolarWinds server itself. Quality of Experience (QoE) is the feature that uses the packet inspection capabilities, and you can see it measured here.

[Screenshot]

Here are the details on QoE for SQL Server:

[Screenshot]

Here the deployment is explained in the product, as we go through setting up the agent on other hosts:

[Screenshot]

Here is some more detail on the various data captured with the QoE sensor. You’ll notice it’s summarized, but you can see the response time by protocol for the host. Here you’ll see MSSQL, CIFS, and HTTP traffic being captured and reported on.

[Screenshot]

Additional data around transaction volume, data volume, and other views is presented in the dashboard.

[Screenshot]

When you install new sensors, you can manage them all from this view:

[Screenshot]

You can also tune what is being captured from each sensor; you’ll notice there is a large array of standard applications that can be recognized:

[Screenshot]

When you want to add a node, you can do so completely remotely, and the agent is pushed, as you can see below:

[Screenshot]

Many people will ask what the overhead is, and the answer is that it depends. In production workloads, based on the level of traffic being sent and what you are analyzing, there will be some CPU usage from the monitoring, but with today’s processors and computers it shouldn’t be more than a few percent utilization (4-6%). It also shouldn’t block any I/O or introduce any latency unless it’s under very heavy load.

Overall this product is a good first move for SolarWinds to commoditize the ability to do decentralized deep packet inspection for network performance. We expect others to move as well, and bring the cost of these tools and solutions down significantly.

Next up will be Graylog2!

Category: IT Operations, Monitoring, NPM, NPMD, Pick of The Week

Gartner Data Center Conference and a research update

by Jonah Kowall  |  December 12, 2014  |  1 Comment

I hope everyone in the US had a good turkey day; sorry for the lack of updates. I’ve been on the road since before Thanksgiving. Last week we had a record-breaking Gartner Data Center conference in Las Vegas, and demand for monitoring-related attendee one-on-ones was extremely high. I had over 40 one-on-one conversations with clients, which revolved around modernization, simplification, and improvement of existing monitoring strategies. I also gave presentations focused on Mobile APM and Web-Scale APM, along with running a roundtable of over 30 end users who wanted to discuss monitoring initiatives. The roundtable was interesting, as much of the conversation revolved around several key items:

  • Reduce noise by using ITOA or different approaches to event correlation; this naturally led toward log analytics tools such as Splunk, VMware, and open source alternatives
  • Modernize infrastructure availability monitoring (unified monitoring)
  • Reduce cost and tool sprawl
  • Increase or add application performance monitoring with a focus on end user experience and root cause analysis
  • KPI and metric reporting

After the conference ended, I ran off to Asia to meet with end-user clients; I’m writing this while wrapping up a couple of days of client meetings in Kuala Lumpur, Malaysia. Many of the same trends were present in discussions here. Compuware (Dynatrace) has a big APM presence in Asia, including Malaysia; with little to no competition, it has grown a good business here. Most users leverage DCRUM, but some are using or testing Dynatrace. While clients are happy with the Dynatrace tools, I hear a constant set of complaints revolving around complexity. The plans to move ease-of-use features from Ruxit into the enterprise-focused products have given many customers hope for an easier future. Many of the users of these tools here require managed services due to the complexity of getting value from the products. The partners providing those services, of course, shield customers from easier options, including Compuware’s own Ruxit offering.

Next stop is Hong Kong, where I expect more great meetings with clients and prospects.

Recently published research has included these two items (clients only, sorry):

Predicts 2015: IT Operations Management – One of the predictions in this research revolves around the changes in strategy and toolsets already evident in my inquiries and client discussions: “By 2018, 20% of IT operations organizations will abandon legacy monitoring tools for new monitoring architectures, up from 2% today.” More in-depth analysis can be found in the research.

Critical Capabilities for Application Performance Monitoring Tools – This companion document rates APM offerings for specific capabilities by buyer persona. This is a complementary document to the Magic Quadrant, and is a must read for any APM buyer or user!

Finally, expect more research in the coming 6 weeks, including an updated RFP template for APM buyers, a Mobile APM market guide, and hopefully some other gems based on what I can get done while everyone is on holiday :)

Thanks!

Category: Analytics, APM, ECA, IT Operations, ITOA, Logfile, Mobile, Monitoring

New Research : Implement Mobile Application Performance Monitoring for App Analytics and App Quality Visibility

by Jonah Kowall  |  November 19, 2014  |  2 Comments

We’ve published new research on mobile application performance monitoring (APM). This research highlights some of the changes in the market since we published our first notes a year ago. There have been many interesting developments, along with growing interest among our client base. The changes include the melding of analytics technologies with APM visibility to create stronger platforms for understanding end-user experience, and deeper device and application monitoring. We also analyze behaviors that cause user abandonment, and why quality is an issue within mobile applications.

Additional analysis of the buyers and the future state of the market is included in this research. We are planning to release a market guide for Mobile APM in Q1, which will be an update to the vendor landscape we did in late 2013. That research will also include market sizing estimates covering usage, penetration, and revenue.

I will be presenting brand new mobile APM content at the upcoming Gartner Data Center conference, which complements this research note (http://www.gartner.com/technology/summits/na/data-center/).

Clients can access this new research here: Implement Mobile Application Performance Monitoring for App Analytics and App Quality Visibility – http://www.gartner.com/document/2913817

Category: Analytics, APM, IT Operations, Mobile, Monitoring, SaaS

AppDynamics AppSphere 2014 – Wrap-up

by Jonah Kowall  |  November 15, 2014  |  4 Comments

Sorry for the delay; it’s been crazy for the last week, to say the least. Last week I was able to attend the inaugural AppDynamics user conference, AppSphere. It was a short trip, and I was unable to attend the final day, which included interesting breakout sessions (that seems to be a theme for the second half of this year).

AppDynamics made several key announcements during the conference, including great revenue and employee growth; there is no question this high-flying company is on a path toward IPO, as another competitor is currently doing. The main announcements included the December availability of the AppDynamics fall 2014 release, which includes a rearchitecture of the platform, dubbed the “Application Intelligence Platform”. They also announced the availability of over 100 platform extensions to extend and enhance the value of the platform. The kickoff keynote by founders CEO Jyoti Bansal and CTO Bhaskar Sunkara also announced the new ‘Application Analytics’ module (pricing has yet to be determined), a core capability to extract, correlate, and visualize ITOA and business impact data, also available in December. This underpinning architectural change will allow them to do much more sophisticated and diverse data collection and larger-scale analytics (web-scale). (Related research published not long ago: Monitoring Must Evolve to Meet Tomorrow’s Demands – http://www.gartner.com/document/2809724.)

The other big expansion was the identification of the need to address gaps such as a broader unified monitoring platform. This is slightly different from Gartner’s current perspective on unified monitoring, which is availability focused; AppDynamics has much grander plans for the platform (client research: Modernize Your Monitoring Strategy by Combining Unified Monitoring and Log Analytics Tools – http://www.gartner.com/document/2615618).

Additional demos and announcements included a virtual war room feature, which tries to replace the painful way we troubleshoot today, often with long and painful conference calls and meetings. This has been attempted before, unsuccessfully, but I wish them luck :) One announcement, lacking detail, was a beta version of an AppDynamics synthetic monitoring product. Synthetic monitoring is complementary to APM, and we’ve seen more of these products released within existing APM offerings in the past few months (e.g., AlertSite and New Relic); this will help reduce market fragmentation and provide new use cases for synthetic monitoring.

Toward the end of day one I was asked to moderate the cloud panel, which was a lively discussion. While I would have liked a bit more bloodshed, it was interesting and I learned quite a bit myself from the participants. There was a good mix of vendors on the panel: IBM (SoftLayer), Red Hat, Microsoft, Pivotal, and Google. Each had different perspectives on cloud (public and private), the way people are using it, and where it is heading. More detail on this conversation can be found here: http://www.appdynamics.com/blog/news/appsphere-speaker-panel-cloudy-with-a-chance-of-innovation/

I had several other meetings during my short stay, which were all engaging. There were many good customer sessions within my formal and informal discussions; these passionate customers were solving very real and complex problems using AppDynamics (and other vendors’) products. It’s great to see customers using APM technologies to solve business problems. Kudos to the AppDynamics team for a great conference.

Disclosure: AppDynamics paid Gartner to have me keynote and participate in the conference.

Category: Analytics, APM, Big Data, DevOps, IT Operations, ITOA, Logfile, Mobile, Monitoring, SaaS, Trade Show

More trouble in the waters: Correlsense falters, but does not fall

by Jonah Kowall  |  November 6, 2014  |  8 Comments

Unfortunately, we’ve already lost one of the innovators in APM this year (Optier), and we have another trying to survive. When a company creates and builds unique and innovative technology, that doesn’t equate to running a successful business; business execution encompasses many facets beyond technology. About two weeks ago I was made aware of some issues at APM vendor Correlsense, and before declaring that the sky is falling, we did some due diligence. You can find a couple of stories in the press (which are not entirely accurate):

http://jewishbusinessnews.com/2014/10/28/correlsense-it-monitoring-software-maker-announces-massive-layoffs/

http://www.calcalist.co.il/internet/articles/0,7340,L-3643229,00.html (this one is in Hebrew, but Google Translate handles it fine)

Taking a step back, Correlsense has unique technology and has been around for a while. The company has raised funding through several smaller rounds (http://www.crunchbase.com/organization/correlsense). In the latest change of management, about 20 months ago, they brought in a veteran management team that came from very traditional enterprise software businesses. Most readers here realize what we’ve been writing for a while now: APM has undergone a radical change over the past few years; those who evolve win, and those who do not lose over time. The enterprise APM market is not the same as the enterprise monitoring market of five years ago. Things have changed, and the management team at Correlsense came from classical backgrounds and refused to evolve (basically ignoring people like me, and believing they were correct). Correlsense was once in the APM Magic Quadrant, but they could not deliver a SaaS-based product, so when we made that a requirement in 2013 they were unable to qualify. The management team determined there was no demand for SaaS from the enterprise and would not put it on the roadmap. This was a grave mistake (one which Optier realized too late, given its funding history) and prevented them from adapting to market changes. Additionally, about three years ago, with the massive growth in environments, Correlsense had some scalability issues. These were solved with a large rewrite of the product on a modern non-relational data store. That is very hard to accomplish, and the engineering team deserves kudos for pulling it off, though it was not without some bruises. Regardless of the unique and differentiated technology, business execution was a major issue which did not change.

The result is that Correlsense has had layoffs, especially in the US headquarters, and has also moved away from the sales and business models created by the previous management team. The company has moved its headquarters back to Herzelia, Israel (where it came from), and they are still sorting out the CEO; hopefully it will be one of the founders, as he knows the market and the technology of APM. The result is a change of focus, away from the US and toward Europe, where they have seen momentum. Customers in the US will still have support, but there will be a smaller presence there. While I don’t agree that Europe is a particularly strong market, given its fragmentation, a good sales and support team can grow a business. They will be focusing on partners and channels to handle most of the sales, versus a centralized sales model. They have a common investor with Centerity, and that partnership makes a lot of sense to grow both businesses. Centerity is a unified monitoring product we have covered in other research.

The company plans to build a SaaS platform with a try-and-buy model, which will be the focus of its future direction; the timing, naming, and functionality have yet to be determined. They will focus on usability and a modern application consumable on tablet and mobile devices, and will move away from large enterprise software deals toward lower-cost subscription pricing. This is a major shift which many have tried, but we’ll see if Correlsense can pull it off. They will still have the current SharePath Enterprise product, focused on difficult-to-monitor applications such as thick clients, Tibco, Tuxedo, WebSphere Message Broker, Oracle Forms, and other legacy applications. This is clearly not going to be the focus moving forward, but it is of use to some buyers. With the combined IPM and APM capabilities, they certainly have great intellectual property and a chance to correct the course of the company. We wish them luck!

Comments below, or on Twitter @jkowall, please.

EDIT: Lanir Shacham, the founder, is now the CEO as of two weeks ago.

Category: Analytics, APM, Big Data, IT Operations, Monitoring, SaaS

New Research: Magic Quadrant for Application Performance Monitoring 2014

by Jonah Kowall  |  November 3, 2014  |  3 Comments

Building large research projects is a major undertaking, not just for the analysts, but also for the vendors who provide the large amounts of data we request and the references we leverage, aside from the well over 1,400 APM inquiries we take yearly from end-user clients buying and implementing APM. We’ve published the 2014 edition of our yearly Magic Quadrant for APM. In fact, it published a week ago, but I’ve been too busy to blog about it. Understand that the technology cutoff was in the summer, so enhancements since then will be counted in the 2015 research.

There are already several press releases, and licensed copies are available from some of them. We have three leaders in this research, and they have been extending their ability to cater to APM-specific buyers. These three leaders, AppDynamics, Compuware (soon to be Dynatrace), and New Relic, are moving in different directions, but each uses its depth in application instrumentation to lead the efforts.

We had one new entrant, SmartBear, which acquired Spanish APM company Lucierna and created an interesting new offering called AlertSite UXM. I have been testing the instrumentation in the lab recently and will be blogging on it shortly. There have also been major shifts in the last year involving CA, HP, IBM, and Microsoft. You can read about these changes, trends, and insights in the research.

There will be a new research note, called a Critical Capabilities, coming out in the next few weeks, which will be a yearly companion to this Magic Quadrant. We are doing more of these across Gartner, as they are very complementary in nature. The Magic Quadrant rates and evaluates a vendor’s ability to execute and its vision. The Critical Capabilities document (in our case) will outline several typical buyers or communities of APM users and show each vendor’s ability to meet specific use cases. The use cases span typical and emerging criteria based on technologies and capabilities, each of which has different importance to each buyer. Although I’d like to give away more than I have about this exciting upcoming note, I will hold off for a few weeks :) Think of this document as a deep technical view of a vendor’s ability to meet these capabilities.

Please leave comments, thanks for reading, and we appreciate the vendors who participated in the research process.

Category: Analytics, APM, Big Data, IT Operations, ITOA, Mobile

Compuware Perform 2014 Conference

by Jonah Kowall  |  October 24, 2014  |  2 Comments

I was invited to and able to attend Compuware Perform in Orlando the week of November 6th for a couple of days before heading off to Europe for 10 days of vacation, hence this blog post was slightly delayed (not to mention the Magic Quadrant and other research, which is imminent now). The conference was great to attend, as Dynatrace re-introduced the world to the brand; what is old is new once again. The products have been renamed as follows:

[Dynatrace product renaming chart]

John Van Siclen, who was previously the CEO of dynaTrace (acquired by Compuware in 2011), was the general manager for the Compuware APM business. He is now the general manager for the Dynatrace company. The expectation is that this will be a standalone company when the privatization of Compuware closes in the fourth quarter of this year (more on that later). John’s keynote had some interesting points: Compuware has an impressive 89.9 net promoter score and an online community of 84,000 people.

Dynatrace launched four key targets and messages for the solutions: launch readiness, user analytics, performance engineering, and production monitoring.

Dynatrace plugins will be open (as other APM companies have done), and they will be leveraging GitHub: http://www.compuware.com/en_us/application-performance-management/products/user-experience-management/real-user-monitoring-web-and-mobile/plugin.html

Innovation will flow from Ruxit to Dynatrace, including the network probe agents and the UI. I’ve already posted my thoughts on my time with Ruxit, which were positive.

Dynatrace 6 was unveiled, which includes agent support for technologies such as NGINX and IBM IMS, with improvements for TIBCO ActiveMatrix, TIBCO EMS, Java 8, iOS 8, HBase, Cassandra, MongoDB, and iPlanet. Dynatrace showed the new web UI, which provides some high-level views outside of the thick Java-based client they use today. I wasn’t too impressed with the new dashboard and UI, but it was an early-stage prototype I wouldn’t have expected to see in a public forum. I am sure it will improve considerably before it ships next year.

There are some nice improvements in synthetic monitoring, including a free web test for lead generation (I still believe what you get for free with WebPagetest is far more complete). Investment in synthetic monitoring seems too heavy; this has been an issue with Compuware for quite some time.

I spent time in a couple of sessions on DCRUM, and the product is still very focused on network buyers and legacy applications. I wasn’t too impressed with what I saw. I would have liked to see a more innovative approach to solving these problems, similar to the agent present in Ruxit, which is how we see the future of network analysis being done.

The one large piece missing from my discussions and the messaging was analytics. While Dynatrace has lots of great analytics technology within the products for determining root cause, understanding performance deviations, and finding major issues within applications, for the broader analytics and ITOA strategies many are strategically invested in, Dynatrace’s messaging consists of integrations with providers like Geckoboard, Splunk, and others. This is clearly not what APM buyers are looking for today.

On the non-product side, many of you are aware that private equity (PE) firm Thoma Bravo, which has an extensive background in the monitoring and management space (Network Instruments, Keynote, Infovista, and others), has decided to purchase Compuware and divide the company into several parts to drive growth. While many PE firms operate in other ways, Thoma Bravo is a unique firm. One of its VPs, Chip Virnig, spoke a little about how APM is a great market in which to place bets (which I agree with) – http://www.thomabravo.com/team/virnig/

Some recommended reading about PE firms (sorry, clients only): How to Re-evaluate Strategic Vendors Acquired by Private Equity – http://www.gartner.com/document/2840319

I spoke to several customers, and some were using end-to-end mobile APM. There were interesting use cases, and capabilities within the offering have matured nicely. Expect new research and presentations on mobile APM in the next six weeks at our upcoming Gartner Data Center Conference. Dynatrace 6 offers major improvements in scalability in terms of how many controllers are needed; customers confirmed they needed far fewer controllers and management servers with the newer products, which is good to hear from end users. There is more planned in terms of scale and capabilities.

Even though I was only able to attend one day of three, I got a lot of value out of this conference.

Disclosure: Compuware paid for travel and hotel for my attendance at this conference.

Category: Analytics, APM, IT Operations, ITOA, Mobile, Monitoring

Personal Disclosure

by Jonah Kowall  |  October 23, 2014  |  4 Comments

I’ve written quite a bit about how analysts work, and here are some of my personal favorite reads, but they are more focused on vendors versus end users or investors:

http://theleanmarketer.com/6-myths-about-industry-analysts-and-startups/

http://www.aneelism.com/blog/2013/11/11/technology-analyst-101-for-startups-introduction.html

I’ve shared my personal social media policy, along with other aspects of how I personally work.

This blog post is specifically about attendance at vendor conferences, for transparency. At some of the conferences I attend, I am provided media access and use my credentials as someone who covers and writes about a specific vendor or market. Vendors often pay for travel and hotel for their conferences should I choose to attend. Similarly, they can also hire Gartner analysts to speak at conferences, which requires a larger financial commitment on their part.

In addition to attending the conference itself, I also meet with other clients, prospects, and interesting people at these events, so sometimes Gartner will pay for my attendance for research purposes, or to support our business growth and success.

In my past blog posts, I have not been completely transparent about my conference attendance. It wasn’t that I wished to avoid being honest; I just hadn’t really thought about it. When reading some of the other coverage in the press, I realized that in order to be fair, I needed to do the same thing.

Going forward you will see full disclosure of when conferences were paid for by a vendor, or when conference admissions were provided by another organization.

Category: Trade Show