This has been a pet peeve of mine for many years. As soon as I discovered real user monitoring (RUM) data for measuring the performance of actual users (probably around 2003), I was determined to eliminate all but the most basic synthetic monitoring. At a previous position, we spent hundreds of hours per week building and maintaining well over 2,500 synthetic tests across hundreds of applications. That was useful from an SLA perspective, but fewer than 50 of those applications were covered by our RUM solution. Now that the cost and complexity of RUM have come down considerably, there is really no reason to use synthetic monitoring extensively, yet most Gartner clients are still relying on it nearly 9 years later.
Many vendors still push synthetic monitoring today because they lack RUM capabilities, and of course buyers listen to the vendor and purchase it. Synthetic data is useful, but using it to measure actual service level delivery is a misuse of the data, and shortsighted. During client inquiries I explain what is possible with RUM data, but I often feel I'm educating the masses. This research note explains why you should focus on RUM, and how to limit synthetic transactions to calculating availability rather than performance.
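To make that distinction concrete, here is a minimal sketch of the two calculations. The `SyntheticCheck` and `RumSample` shapes are hypothetical, my own illustration rather than any vendor's API: availability comes from the success rate of scheduled synthetic checks, while performance comes from a percentile over real-user timings.

```typescript
// Result of one scheduled synthetic check: did the scripted transaction succeed?
interface SyntheticCheck {
  timestamp: number;
  success: boolean;
}

// One real-user page view, with the load time that user actually experienced (ms).
interface RumSample {
  timestamp: number;
  loadTimeMs: number;
}

// Availability from synthetic checks: the share of checks that succeeded.
function availability(checks: SyntheticCheck[]): number {
  if (checks.length === 0) return 0;
  const up = checks.filter((c) => c.success).length;
  return up / checks.length;
}

// Performance from RUM: the p95 load time, i.e. what the slowest 5% of
// real users saw -- something a handful of probes from a few data
// centers cannot represent.
function p95LoadTime(samples: RumSample[]): number {
  if (samples.length === 0) return 0;
  const sorted = samples.map((s) => s.loadTimeMs).sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1);
  return sorted[idx];
}

// Example: 1 failed check out of 100 => 99% availability; the RUM
// percentile shows how the page actually felt, which the synthetic
// number hides entirely.
const checks: SyntheticCheck[] = Array.from({ length: 100 }, (_, i) => ({
  timestamp: i,
  success: i !== 42, // one outage window
}));
const rum: RumSample[] = Array.from({ length: 1000 }, (_, i) => ({
  timestamp: i,
  loadTimeMs: 800 + Math.random() * 4000, // wide spread of real experiences
}));

console.log(`availability: ${(availability(checks) * 100).toFixed(2)}%`);
console.log(`p95 load time: ${p95LoadTime(rum).toFixed(0)} ms`);
```

The point of the percentile is the tail: synthetic probes can tell you the service answered, but only a population of real-user samples can tell you how bad the experience was for the users who hit the slow paths.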