Jonah Kowall

A member of the Gartner Blog Network

Research Vice President
3.5 years with Gartner
20 years IT industry

Jonah Kowall is a Research Vice President in Gartner's IT Operations Research group. He focuses on application performance monitoring (APM), Unified Monitoring, Network Performance Monitoring and Diagnostics (NPMD), Infrastructure Performance Monitoring (IPM), IT Operations Analytics (ITOA), and general application and infrastructure availability and performance monitoring technologies.


Velocity Conference 2014 Wrap-up

by Jonah Kowall  |  June 30, 2014  |  3 Comments

Always one of the more enjoyable conferences for me to attend: I don't get worked as hard as at Gartner conferences, which are also really enjoyable, but where I spend my time doing the educating rather than listening to other smart people. Velocity is a practitioner-focused conference and is very geeky (in a good way, for those of us who are fairly deep technologists). I'll highlight some of the great sessions I attended and other technologies I discovered.

The conference is put on by a competitor, of course, since we run our own events, but it drew over 2,400 registered attendees and more than 100 sponsors. There is clearly growth here, and the conference keeps getting larger. Here are some session notes I found interesting. You'll notice a pretty wide spread, from front-end performance to application middleware and backends.

Webpagetest deep dive –

This is a great open source tool for measuring and diagnosing front-end performance. I've used the tool, but had mostly been ignoring it since it didn't seem to be evolving much. That was quite a mistake: it has evolved considerably since I last really used it.

  • Good to dig into the new features in the advanced settings tab
  • Always run more than one test when measuring
  • Very cool advanced visual comparison
  • Filmstrip view has been improved
  • Can do mobile runs, which show the page in a mobile browser (very cool)
  • Browser CPU usage stats can be overlaid on waterfall
  • Can export tcpdump (use in wireshark or cloudshark)
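Several of these features can also be driven programmatically. A hedged sketch in Python that builds a WebPagetest run request (the endpoint and parameter names follow the public webpagetest.org API as I understand it; the API key is a placeholder):

```python
from urllib.parse import urlencode

API = "https://www.webpagetest.org/runtest.php"  # public WebPagetest test endpoint

def build_test_url(target, api_key, runs=3):
    """Build a WebPagetest run request: multiple runs, mobile emulation,
    tcpdump capture, and video recording for the filmstrip view."""
    params = {
        "url": target,
        "k": api_key,    # API key (placeholder)
        "runs": runs,    # always run more than one test
        "f": "json",     # machine-readable response
        "mobile": 1,     # run in a mobile browser
        "tcpdump": 1,    # capture a pcap for Wireshark/CloudShark
        "video": 1,      # record the filmstrip
    }
    return API + "?" + urlencode(params)

print(build_test_url("https://example.com", "YOUR_KEY"))
```

The response (with `f=json`) includes URLs for polling results, so the same sketch extends naturally to automated regression checks.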

Docker –

Content was good for those who hadn't used Docker. I've done some basic work with it and find it interesting, but also quite basic in nature. Some of the discussion hit on issues around security, support for other containers, and the overall limitations of this immature but evolving technology.

  • The room was packed.
  • Dockerfile instructions (kind of like an init.d script): I hadn't used these before, but they are critical when using Docker at scale.
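To make the Dockerfile point concrete, here is a minimal, illustrative Dockerfile (the base image, package, and paths are invented for the example), held in a Python string so the instruction keywords can be pulled out:

```python
# A minimal, illustrative Dockerfile: each instruction builds one image layer,
# and CMD plays roughly the role an init.d script would on a traditional host.
DOCKERFILE = """\
# base image
FROM ubuntu:14.04
# bake dependencies into the image at build time
RUN apt-get update && apt-get install -y nginx
# add application content
COPY site/ /usr/share/nginx/html/
# document the listening port
EXPOSE 80
# process started by `docker run`
CMD ["nginx", "-g", "daemon off;"]
"""

# Instruction keywords are the first token of each non-comment line.
instructions = [ln.split()[0] for ln in DOCKERFILE.splitlines()
                if ln.strip() and not ln.startswith("#")]
print(instructions)  # ['FROM', 'RUN', 'COPY', 'EXPOSE', 'CMD']
```

At scale, the value is that the whole build recipe is declarative and versioned alongside the application.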

RUM Comparison and Use Cases –

The team at SOASTA presented a vendor-neutral view of RUM. I found the landscape they laid out basic and partially incomplete, but it was still a valiant effort by the team there. The key takeaway is that more users are trying to tie business metrics to RUM data; for example, e-commerce companies tying revenue to users and performance and analyzing the relationship.
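A hedged sketch of that takeaway, with invented beacon data: bucket RUM sessions by page load time and compare conversion and revenue per session across the buckets.

```python
from collections import defaultdict

# Invented sample RUM beacons: (page load time in seconds,
# order value in dollars; 0.0 means no purchase).
beacons = [(1.2, 80.0), (1.8, 0.0), (2.1, 45.0), (3.9, 0.0),
           (4.4, 0.0), (0.9, 120.0), (5.2, 0.0), (2.7, 60.0)]

def revenue_by_speed(beacons, threshold=3.0):
    """Split sessions into 'fast' and 'slow' by load time, then compare
    conversion rate and revenue per session between the two groups."""
    groups = defaultdict(list)
    for load_time, revenue in beacons:
        groups["fast" if load_time < threshold else "slow"].append(revenue)
    report = {}
    for name, revenues in groups.items():
        conversions = sum(1 for r in revenues if r > 0)
        report[name] = {
            "sessions": len(revenues),
            "conversion_rate": conversions / len(revenues),
            "revenue_per_session": sum(revenues) / len(revenues),
        }
    return report

print(revenue_by_speed(beacons))
```

With real beacons the threshold would come from the data (e.g. quantile buckets), but the shape of the analysis is the same.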

Google – Jeffrey Dean –

Interesting discussion by Google's Jeffrey Dean. The part I found most interesting was his analysis of replicating data to extra nodes to reduce latency, and of course the multiple-write technologies many use to handle that replication closer to the source of the data.
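The related "backup request" idea can be sketched in a few lines. This is my own toy model of the latency math, not Google's implementation: if the first replica hasn't answered within a short deadline, issue the request to a second replica and take whichever answer arrives first.

```python
def hedged_latency(primary_ms, backup_ms, hedge_after_ms=10):
    """Latency seen by the client when a backup request is sent to a
    second replica after hedge_after_ms with no reply from the first."""
    if primary_ms <= hedge_after_ms:
        return primary_ms  # primary answered before we hedged
    # Backup is issued at the hedge deadline; the first response wins.
    return min(primary_ms, hedge_after_ms + backup_ms)

# Invented example latencies: a slow primary (say, a GC pause)
# is masked by a fast second replica.
print(hedged_latency(8, 9))     # 8: no hedge needed
print(hedged_latency(200, 12))  # 22: backup reply at 10+12ms beats the primary
```

The design trade-off is a small amount of extra load (the hedged requests) for a large reduction in tail latency.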

Keynote Systems –

Ben investigated what page load times look like. Some of the interesting data he presented showed how "fast" varies by country and other demographic data. He also used the video capture features of webpagetest.

Speedcurve – Blog and Video of the Keynote –

This was one company I hadn't heard of (well, more like a one-man show): an interesting company that provides a nice frontend and comparative analysis using a webpagetest backend. Some notes:

  • Sits on top of webpage test
  • Competitive benchmarking, runs once a day, multiple runs
  • Complements RUM
  • Shows filmstrips
  • Formats the data much better
  • Helps find savings, etc
  • Can get to webpagetest views as well
  • Showed some interesting research on visualizing data


Understanding Slowness –

Always a highlight of Velocity for me; Theo is a unique and extremely bright individual. He always brings good analysis and practical content, and he's an ops guy through and through. There is none of the marketing or other fluff you often see in conference content. Some high-level notes:

  • Document your architectures
  • Have a plan
  • Use redundant vendors, don’t put your eggs in one basket (easier said than done, but for some things a good idea)
  • Measure latency (performance)
  • Quantiles over histograms
  • Observation – capture state, watch
    • dtrace, truss, tcpdump, snoop, sar, iostat, etc.
  • Synthesis – run a test to enable diagnostics (replicate an issue)
    • curl
  • Manipulation – test a hypothesis
    • vi/echo
    • sysctl/mdb
    • dtrace
Some Simple Math to get Some Signal out of Your Ops Data Noise –

I'm not sure I'd call this simple math at all, but this is a very new company we awarded a Cool Vendor designation this year for APM and ITOA; they focus on ITOA use cases with their solution. They have a lot of growing up to do as a company, but they have some compelling analytics technologies. Mr. Boubez took the audience on a journey through the math: what we've tried (which doesn't work too well) and some techniques which work much better. Clearly worth a look.

  • Gaussians don’t work with data center data
  • Use histograms (even though Theo says they may not be the best visual analysis tool)
  • Kolmogorov-Smirnov test handles this data better
    • Handles periodicity in the data
  • Box Plots / Tukey
    • Doesn’t rely on mean and stddev
    • IQR moving windows
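The Tukey/IQR bullets can be sketched as a simple moving-window outlier check; the window size and data below are illustrative:

```python
from statistics import quantiles

def tukey_outliers(series, window=20, k=1.5):
    """Flag points outside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR],
    computed over a trailing window of earlier points. No mean or
    standard deviation involved, so one spike doesn't skew the baseline."""
    flagged = []
    for i in range(window, len(series)):
        recent = sorted(series[i - window:i])
        q1, _, q3 = quantiles(recent, n=4)  # quartile cut points
        iqr = q3 - q1
        if not (q1 - k * iqr) <= series[i] <= (q3 + k * iqr):
            flagged.append(i)
    return flagged

# Invented metric: a steady sawtooth with one spike at index 30.
data = [10.0 + (i % 5) * 0.5 for i in range(40)]
data[30] = 55.0
print(tukey_outliers(data))  # [30]
```

This is the appeal of the approach for data-center metrics: the fences adapt to the recent window, and the quartiles are robust to the very anomalies you're hunting.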

An early-phase tool for running rules against frontend optimizations, which is a cool idea. I'm going to wait for lab time until version 3, written in Node.js, comes out in three weeks.


Category: APM Monitoring Trade Show
