Project downloads, job postings, GitHub commits and mailing list activity are common methods of gauging traction for commercial and open source software projects. I frequently use those same metrics as inputs when looking at emerging projects, and for good reason.
Increasing traffic on a user mailing list indicates growing use and adoption. However, if that traffic keeps growing, especially after a milestone release, it can indicate poor documentation or quality control. Do the same questions keep coming up on user forums? Your documentation is poor or nonexistent. Did the bug count go up after the 1.0 release? It looks like you’re prioritizing features over quality.
Metrics like these are useful, but they don’t exist in a vacuum. They need context: the kind provided by the overall market, vendor strategies and customer feedback. A simple measure of popularity lacks this context, and therefore lacks insight. Popularity also doesn’t measure sentiment. You don’t know whether someone on Stack Overflow is saying “Database X is awesome” or “Database Y is awful.”
Also concerning is the self-fulfilling nature of popularity. Once a database is measured as more popular than another, it experiences a further uptick in popularity metrics, and the cycle continues. Ultimately this produces more marketplace hype and confusion.