Velocity is always one of the more enjoyable conferences for me to attend. I don’t get worked as hard as at Gartner conferences, which are also really enjoyable, but there I spend my time doing the educating rather than listening to other smart people. Velocity is a practitioner-focused conference and is very geeky (in a good way, for those of us who are pretty deep technologists). I’ll highlight some of the great sessions I attended and other technologies I discovered.
The conference is put on by a competitor, of course, since we do our own events, but they had over 2,400 registered attendees and over 100 sponsors. There seems to be growth here, and the conference gets larger every year. Here are some session bullets I found interesting. You’ll notice a pretty wide spread, from front-end performance to application middleware and backends.
WebPagetest
This is a great open source tool for measuring and diagnosing front-end performance. I’ve used the tool but had been mostly ignoring it, since it wasn’t evolving much. That was quite a mistake, as it has evolved considerably since I last really used it.
- Good to dig into the new features in the advanced settings tab
- Always run more than one test when measuring
- Very cool advanced visual comparison
- Filmstrip view has been improved
- Can do mobile runs, which show it in a mobile browser (very cool)
- Browser CPU usage stats can be overlaid on waterfall
- Can export tcpdump (use in wireshark or cloudshark)
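The features above can also be driven programmatically through WebPagetest’s REST API. A minimal sketch (the API key is a placeholder, and the helper name is mine, not part of the tool) of building a test-submission request:

```python
import urllib.parse

# The public instance; private WebPagetest instances expose the same API.
WPT_HOST = "https://www.webpagetest.org"

def build_test_request(url, api_key, runs=3):
    """Build a runtest.php request URL for the WebPagetest REST API.
    runs=3 follows the advice above to always run more than one test."""
    params = {"url": url, "k": api_key, "runs": runs, "f": "json"}
    return WPT_HOST + "/runtest.php?" + urllib.parse.urlencode(params)

# Submitting the test would then be:
#   urllib.request.urlopen(build_test_request("https://example.com", key))
```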
Docker – https://twitter.com/kartar
Content was good for those who hadn’t used Docker. I’ve done some basic work with it and find it interesting, but also quite basic in nature. Some of the discussion hit on issues around security, support for other containers, and overall limitations in this immature but evolving technology.
- The room was packed.
- Dockerfile instructions (kind of like an init.d script): I hadn’t used these before, but they are critical when using Docker at scale.
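As a sketch of what those instructions look like, here is a minimal Dockerfile (the base image and packages are illustrative, not from the talk) that plays roughly the role an init.d script would:

```dockerfile
# Base image to build on
FROM ubuntu:14.04
# Install a web server into the image (illustrative choice)
RUN apt-get update && apt-get install -y nginx
# Copy site content into the image
COPY ./site /usr/share/nginx/html
# Port the container listens on
EXPOSE 80
# Process to start when the container runs
CMD ["nginx", "-g", "daemon off;"]
```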
The team at SOASTA presented a non-vendor-biased view of RUM. I found the landscape they laid out basic and partially incomplete, but it was still a valiant effort by the team there. The key takeaway is that more users are trying to tie business metrics to RUM data; for example, e-commerce companies are tying revenue to user and performance data.
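The revenue-to-performance tie can be sketched in a few lines: bucket sessions by page load time and total revenue per bucket (the beacon data and helper name here are made up for illustration, not from the SOASTA talk):

```python
from collections import defaultdict

# Hypothetical RUM beacons: (page load time in seconds, session revenue)
sessions = [(1.2, 35.0), (2.8, 0.0), (0.9, 120.0),
            (4.5, 0.0), (1.8, 60.0), (3.2, 15.0)]

def revenue_by_load_bucket(sessions, bucket_s=1.0):
    """Group sessions into load-time buckets; return per-bucket
    session counts and total revenue."""
    buckets = defaultdict(lambda: [0, 0.0])  # bucket index -> [count, revenue]
    for load, revenue in sessions:
        b = int(load // bucket_s)
        buckets[b][0] += 1
        buckets[b][1] += revenue
    return {f"{b}-{b + 1}s": (count, total)
            for b, (count, total) in sorted(buckets.items())}
```

With data like this you can see at a glance whether the slow buckets are where revenue goes to die.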
Google – Jeffrey Dean (http://research.google.com/pubs/jeff.html)
An interesting discussion by Google’s Jeffrey Dean. The part I found most interesting was his analysis of replicating data to extra nodes to reduce latency, and of course the multiple-write technologies many use to deal with that replication closer to the source of the data.
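One well-known form of this latency technique is the hedged request: send the request to one replica, and only fan out to a backup if the first is slow to answer. A minimal sketch with threads (the function name, replica callables, and delay are my assumptions, not Google’s implementation):

```python
import concurrent.futures

def hedged_request(replicas, hedge_delay):
    """Call the first replica; if it hasn't answered within hedge_delay
    seconds, also call the backups and return whichever answers first."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(replicas[0])]
        done, _ = concurrent.futures.wait(futures, timeout=hedge_delay)
        if not done:  # primary is slow: hedge to the remaining replicas
            futures += [pool.submit(r) for r in replicas[1:]]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()
```

The trade-off is extra load on the backups in exchange for cutting the latency tail.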
Keynote systems – https://twitter.com/keynotesystems
Ben investigated what page load times look like. Some of the interesting data he presented showed how “fast” varied by country and other demographic data. He also used the video capture features of WebPagetest.
Speedcurve – https://twitter.com/MarkZeman – Blog and Video of the Keynote – http://speedcurve.com/blog/velocity-responsive-in-the-wild/
This was one company I hadn’t heard of (well, more like a one-man show): an interesting company which does a nice frontend and comparative analysis using a WebPagetest backend. Some notes:
- Sits on top of WebPagetest
- Competitive benchmarking, runs once a day, multiple runs
- Complements RUM
- Shows filmstrips
- Formats the data much better
- Helps find savings, etc
- Can get to WebPagetest views as well
- Showed some interesting research on visualizing data
Understanding Slowness – http://www.twitter.com/postwait : https://speakerdeck.com/postwait/understanding-slowness
Always a highlight of Velocity for me; Theo is a unique and extremely bright individual. He always brings good analysis and practical content, and he’s an ops guy through and through. There is no marketing or other fluff you often see with content at conferences. Some high-level notes:
- Document your architectures
- Have a plan
- Use redundant vendors, don’t put your eggs in one basket (easier said than done, but for some things a good idea)
- Measure latency (performance)
- Quantiles over histograms
- Observation – takes state, watches
- dtrace, truss, tcpdump, snoop, sar, iostat, etc.
- Synthesis – Run a test to enable diagnostics (replicate an issue)
- Manipulation – test hypothesis
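On “quantiles over histograms”: summarizing latency with percentiles takes only the standard library. A sketch (the lognormal sample data is made up; real latencies would come from your measurements):

```python
import random
import statistics

random.seed(42)  # deterministic sample for the example

# Hypothetical latency samples in milliseconds (lognormal-ish, as
# real-world latency distributions tend to be)
latencies = [random.lognormvariate(3, 0.5) for _ in range(1000)]

# statistics.quantiles with n=100 returns the 99 cut points p1..p99
pct = statistics.quantiles(latencies, n=100)
p50, p95, p99 = pct[49], pct[94], pct[98]
print(f"p50={p50:.1f}ms p95={p95:.1f}ms p99={p99:.1f}ms")
```

Reporting p50/p95/p99 rather than a mean keeps the tail visible, which is where users actually feel slowness.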
Some Simple Math to get Some Signal out of Your Ops Data Noise – https://twitter.com/tboubez – http://www.slideshare.net/tboubez/simple-math-for-anomaly-detection-toufic-boubez-metafor-software-velocity-santa-clara-20140625
Not sure I’d call this simple math at all, but here is a very new company we awarded a Cool Vendor this year for APM and ITOA, and which focuses on ITOA use cases with their solution. They have a lot of growing up to do as a company, but they have some compelling analytics technologies. Mr. Boubez brings the audience through a journey of the math: what we’ve tried (which doesn’t work too well) and some techniques which do work much better. Clearly worth a look.
- Gaussians don’t work with data center data
- Use histograms (even though Theo says they may not be the best visual analysis tool)
- The Kolmogorov-Smirnov test allows for better comparison of data distributions
- Handles periodicity in the data
- Box Plots / Tukey
- Doesn’t rely on mean and stddev
- IQR moving windows
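The Tukey/IQR moving-window idea can be sketched in a few lines; points outside the Tukey fences of a trailing window get flagged. The window size and k=1.5 fence factor are the conventional defaults, not necessarily the speaker’s exact parameters:

```python
import statistics

def moving_window_anomalies(series, window=30, k=1.5):
    """Flag points outside Tukey's fences (q1 - k*IQR, q3 + k*IQR)
    computed over a trailing window, so the fences adapt as the
    data drifts; no mean or stddev involved."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        q1, _, q3 = statistics.quantiles(recent, n=4)
        iqr = q3 - q1
        if not (q1 - k * iqr <= series[i] <= q3 + k * iqr):
            flagged.append(i)
    return flagged
```

Because quartiles are robust to outliers, the spike itself doesn’t inflate the fences the way it would inflate a standard deviation.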
Sitespeed.io – https://twitter.com/soulislove
An early-phase tool for running rules against frontend optimization, which is a cool idea. I’m going to hold off on lab time until version 3, written in Node.js, comes out in three weeks.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.