This has been a pet peeve of mine for many years. As soon as I discovered real user monitoring (RUM) data for measuring the performance of actual users (probably around 2003), I set out to eliminate all but the most basic synthetic monitoring. At a previous position, we spent hundreds of hours per week building and maintaining well over 2,500 synthetic tests across hundreds of applications. This was useful from an SLA perspective, but fewer than 50 of those applications were covered by our RUM solution. Now that the cost and complexity of RUM have come down considerably, there is little reason to rely so heavily on synthetic monitoring, yet most Gartner clients are still doing so nearly 9 years later.
Many vendors continue to push synthetic technologies today because they lack RUM capabilities, and buyers, of course, listen to the vendor and purchase them. Buyers recognize that synthetic data is useful, but using it to measure actual service level delivery is a misuse of the data, and a shortsighted one. During client inquiry I explain what is possible with RUM data, but I often feel I am educating the masses. This research note explains why you should focus on RUM, and why you should limit synthetic transactions to calculating availability rather than performance.