In a quick poll I ran at my ADLM session at #gartneraadi, I asked a simple question: what is your company's primary requirements management tool? I offered only two options: MS Office or an RM tool. The result: 55% answered that Office was the tool in use. Does your organization also rely on Office? If so, why? What holds back the adoption of dedicated RM tools? I have several thoughts on this but would love to hear from you.
by Thomas Murphy | December 5, 2013 | Submit a Comment
by Thomas Murphy | December 4, 2013 | Submit a Comment
As we ramp up to update the MQ for Integrated Software Quality Suites, we are kicking off a survey about tool recognition and use. The survey, located at https://t.co/9NXS0lJVPO, asks a few demographic questions, covers various categories of tools (e.g. load testing, test management), and closes with a couple of quick questions about results. It should be fast to take, and we do not collect any company or personal data. If you are involved in testing software, I hope you will consider spending 10 minutes on it. I will share findings here.
Category: Uncategorized Tags:
by Thomas Murphy | November 30, 2013 | Submit a Comment
As much as I dislike Las Vegas, I am excited for Gartner's annual Dev conference and look forward to meeting with our clients. I will be trying out some new data collection techniques and will also spin up a new survey on testing tool use. This will include in-session polling using a technique introduced to me by my high-school freshman daughter (the best place to learn trends).
We are working to firm up our research agenda and on the annual agenda overview notes. The testing tools MQ is underway, we've added additional QA and ADLM coverage in Hong Kong, and I look forward to helping IT organizations make challenging transitions.
When I can, I will be tweeting @metamurph from the event. I hope to see you there and to engage you all in dialog.
by Thomas Murphy | November 13, 2013 | 1 Comment
Microsoft launched its 2013 suite of development tools today and it made me think over the evolution of development environments, from tools focused on making individuals productive to tools that enable teams to work together effectively. Now as we move into a world focused on devices and services, Microsoft has advanced the art of the IDE by playing a common theme: blend the development world with the target environment. This means creating tools that not only make it easy to target a wide variety of clients but also that pull the cloud directly into the development experience.
The cloud has been part of the experience since VS2012, with the ability to run TFS on Azure and the inclusion of Azure credits in MSDN. Microsoft continues to expand this offering and to add services to the Visual Studio Azure experience. It announced preview support for a new service called AppInsights, which provides a variety of correlative data on how your services and applications are being accessed, how they perform, and so on. It also announced project Monaco, a lightweight IDE running in a browser, primarily targeted at editing Azure web apps. SignalR has also been integrated into Azure and is leveraged in the development tools. This enables a great edit/update experience while debugging and prototyping applications and aids in testing applications across a variety of browsers.
As you run through the scenarios that VS and TFS enable, you can see continued steps toward developers working as effective teams: not the traditional world of isolation and a compile/thrash/debug loop, but a continuous, collaborative experience. This also includes support for improved management-level views of projects, with "roll-up" reports across the portfolio as well as integration of the recently acquired InRelease release management assets.
Lab management continues to be a common question on my calls; not everything is here, but the 2013 release makes a nice step for those targeting Azure cloud apps. Microsoft has always been about making it easy to deliver applications to its platform, and while it is certainly battling hard to drive adoption of Windows 8.1 and Windows Phone, you can see that maybe the biggest prize is not the client but the services.
If you have investments in Microsoft development, this is a solid upgrade and Microsoft continues to add good value to MSDN. However, users need to make sure they make use of these benefits.
by Thomas Murphy | November 6, 2013 | Submit a Comment
Recently we have seen a spate of highly visible application rollout failures. Because of the exposure, these may seem like "isolated" incidents, but I believe they are more normal in large system rollouts than we want to admit. The core challenge is that the business generally doesn't value QA or understand the "ROI": "testing just increases the time and cost of delivery". This manifests as a constant drive to reduce the cost of testing and, if possible, to avoid certain elements altogether. This persists until something blows up or regulatory compliance creates an outside driver. In general, a company's focus with respect to quality is on risk management (thus risk-based testing methods) rather than on quality as defined by ISO 9126: functionality (which includes security), reliability, usability, efficiency, maintainability, and portability.
Quality is important not just from the perspective of "does the application work" but also from the perspective of how the application will impact our maintenance costs. However, it is very difficult to turn this into an ROI, and driving quality can't just mean tripling the testing budget. A key element is driving automation and making testing consistent and continuous.
Even if you aren't "agile", use continuous integration; it drives consistency. Your CI system should run a consistent set of tests including unit tests, static analysis, performance, and functional automation. Static analysis tools not only detect errors but also help drive a view of ISO 9126 "conformance". Optimyth's Kiuwan is a great example, and you can see on their site what 230 open source projects look like. I like the way they not only measure quality (5 different components; no one really does all of 9126) but also provide an estimate of the cost to hit target levels. Sonar (open source) and the associated commercial offering from SonarSource are becoming increasingly popular due to broad language support and integration with popular CI systems. There are many competent static analysis tools to choose from; the key is to make sure you get one, put it to use, build a dashboard, and make it visible.
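A CI quality gate of this kind can be sketched in a few lines. The thresholds, severity names, and report format below are hypothetical; they are not tied to Kiuwan, Sonar, or any specific product:

```python
# Hypothetical quality gate: fail the CI build when static-analysis
# findings exceed agreed per-severity thresholds.
SEVERITY_THRESHOLDS = {"blocker": 0, "critical": 5, "major": 50}

def gate(findings):
    """Return (passed, violations) for a list of {'severity': ...} findings."""
    counts = {}
    for f in findings:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    violations = [
        (sev, counts.get(sev, 0), limit)
        for sev, limit in SEVERITY_THRESHOLDS.items()
        if counts.get(sev, 0) > limit
    ]
    return (not violations, violations)

# Example report: 3 critical and 12 major findings stay under the limits.
report = [{"severity": "critical"}] * 3 + [{"severity": "major"}] * 12
passed, violations = gate(report)
print("build passes gate:", passed)
```

The point is not the specific numbers but that the thresholds are explicit, versioned, and enforced on every build rather than debated at release time.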
Functional automation has been a constant challenge for companies. However, the pace of delivery and the need to support multiple browsers, devices, and geographies will either blow up testing costs or be met by automation. I suggest looking at Google's Testing Blog and their TechTalks YouTube channel. Search for GTAC (Google Test Automation Conference) to find a solid set of presentations, like this year's day one keynote: How Facebook Tests Facebook on Android. If your automation tools and skill set don't enable you to drive a high degree of automation, you must invest.
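One reason automation efforts collapse under UI change is that locators are scattered across test scripts. A minimal sketch of the page-object pattern shows the alternative; the driver here is a stub standing in for a real browser-automation library (such as Selenium WebDriver), and all page, locator, and method names are illustrative:

```python
class StubDriver:
    """Stand-in for a browser driver so the example is self-contained."""
    def __init__(self):
        self.fields, self.current_page = {}, "login"
    def type(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        # Pretend login succeeds when a username was entered.
        if locator == "login-button" and self.fields.get("user"):
            self.current_page = "dashboard"

class LoginPage:
    """Page objects keep locators in one place, so a UI change
    touches one class instead of every test script."""
    def __init__(self, driver):
        self.driver = driver
    def login_as(self, user, password):
        self.driver.type("user", user)
        self.driver.type("password", password)
        self.driver.click("login-button")
        return self.driver.current_page

page = LoginPage(StubDriver())
print(page.login_as("tester", "secret"))  # → dashboard
```

Tests then read as business steps ("log in as tester") rather than element lookups, which is what keeps a large suite maintainable as the application evolves.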
Load and performance testing can't just happen in a box at the end of the life cycle. There should be a level of performance tests that run every day. Performance testing must also go beyond meeting some pre-defined user load. Unless you are absolutely sure of the load (and guess what: with a new function or release, people will do things differently than in the "last version"), you have to test to learn. Test beyond expectation and run failure scenarios (Netflix is another great example to model). Use cloud-based load testing that enables you to scale tests to web-scale loads, utilizes real browsers, and enables multiple points of presence. In other words, make your testing as real as possible.
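A daily performance job can be as simple as firing concurrent requests and tracking a latency percentile over time. The sketch below uses a stubbed endpoint so it is self-contained; a real run would hit the system under test, ideally at much larger scale and from multiple points of presence:

```python
import time, random
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call; returns observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.01))
    return time.perf_counter() - start

def run_load(workers=20, requests=100):
    """Fire `requests` calls across `workers` threads; return sorted latencies."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fake_request) for _ in range(requests)]
        return sorted(f.result() for f in futures)

latencies = run_load()
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p95 latency: {p95 * 1000:.1f} ms")
```

The useful output is the trend: if p95 creeps up day over day, you learn about the regression while the offending change is still fresh, not in a crunch at the end of the release.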
Build A Production-like Lab
Automate the lab and build one that mimics production as closely as possible. This may mean virtualization (service, database, and network, in addition to the lab itself), and it also requires, yes, automation. Use the same ARA tools to release into the test lab that are used in production (i.e., test the release every day too). Automate the production of test data either via data generation (e.g. Grid-Tools VTF) or via automated subsetting and masking such as IBM Optim, Informatica Data Subset/Mask, or Grid-Tools.
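To illustrate the masking side, here is a minimal sketch of deterministic masking, which hides real values while still preserving referential integrity (the same input always maps to the same token, so join keys survive). The field names and the rule itself are hypothetical and not meant to represent how Optim, Informatica, or Grid-Tools work internally:

```python
import hashlib

def mask(value, salt="test-lab"):
    """Deterministic one-way mask: same input -> same token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_row(row, pii_fields=("name", "email")):
    """Mask only the PII columns; leave keys and non-sensitive data intact."""
    return {k: (mask(v) if k in pii_fields else v) for k, v in row.items()}

prod_row = {"id": "1001", "name": "Ada Lovelace",
            "email": "ada@example.com", "plan": "gold"}
test_row = mask_row(prod_row)
print(test_row["id"], test_row["plan"], test_row["name"] != prod_row["name"])
```

Because the mask is deterministic, a customer masked in the orders table matches the same customer masked in the accounts table, which is exactly what a production-like lab needs.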
Don’t become the next headline, quality matters and if your company wants to move to an agile, continuous delivery world, it is going to need to invest in quality and automation.
by Thomas Murphy | September 19, 2013 | 6 Comments
A theme that keeps popping up in my conversations with individuals participating in "agile" projects is that it is beginning to feel like a constant "death march". The business continues to press for "more productivity" and has to respond to a constantly changing, competitive marketplace. In traditional waterfall projects, many are used to the last couple of months becoming the death march: long hours, lots of stress, a changing triage bar for defects, all hands on deck as you surge to get the product done "on time". This may mean you have 10 months of a "normal" life and then two months of hell.
But as companies move toward agile, every two weeks may bring a new release: potentially 26 sprints per year. If two of the 10 working days in your sprint are a death march, that is 52 days per year versus roughly 40 days (two months) in the "annual" program, a 30% increase in death-march days. The Agile Manifesto states:
“…Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely…”
The question is what is sustainable. I am hearing stories where it doesn't sound like only the last two days of a sprint are a death march; every day feels like one. This isn't a new topic, and I have included links below to some posts on it. Organizations, or rather teams, need to determine what is sustainable for the team. WIP limits need to be understood: a freeway filled to 100% capacity is a parking lot. Don't let a shift to agile mean a shift to constant running. Global business and mobile devices only make this a more challenging battle.
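The crunch-day arithmetic above is easy to check:

```python
# Crunch days per year: two-month annual push vs. two crunch days
# in every two-week sprint (assuming ~20 workdays per month).
sprints_per_year = 26
crunch_agile = sprints_per_year * 2       # 2 crunch days per sprint -> 52
crunch_waterfall = 2 * 20                 # 2 months of workdays -> 40
increase = (crunch_agile - crunch_waterfall) / crunch_waterfall
print(crunch_agile, crunch_waterfall, f"{increase:.0%}")  # 52 40 30%
```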
Is your organization marching uphill, or has it found the discipline not only to be sustainable but to become more productive, not just in time to value but in driving customer satisfaction and improving quality? What did it take for your team to succeed?
by Thomas Murphy | August 6, 2013 | Submit a Comment
Rally Software today announced the publication of a whitepaper, The Impact of Agile Quantified, that attempts to describe the direct benefits of moving to agile practices. It is based on analysis of user data and will be followed with additional data in the future. We applaud this effort for its scope and for addressing the general need of business users to put a "bottom line" on the question of what agile will do for them. The data helps drive out best practices such as team size and WIP limits and should be useful as the use of agile continues to expand in organizations, both in volume and in the number of roles involved.
Overall, the state of agile is relatively healthy, and data like this helps strengthen the case while illustrating the strength of Rally's market position.
by Thomas Murphy | May 15, 2013 | Submit a Comment
We’ve begun work on the update to the Magic Quadrant for ALM. We are subtly shifting our terminology for the market from Application Lifecycle Management to Application Development Lifecycle Management. We feel this is a more accurate depiction of what the tools in this space are focused on.
Participants this year all have at least 200 active customer installations and $5M per year in revenue. These vendors are also actively brought up by our clients on calls and thus have achieved some recognition at the enterprise level. There are many other products in the market that are effective for specific roles or at the project level; our goal is to look at tools that are effective at scale, and most of the vendors covered have installations with 1,000 or more users.
We will be using the same criteria as the prior edition of the MQ. We will, however, add a use case for product development, which supports the 2011 Maverick research by Matt Hotle on the shift from projects to products.
We will also update several surrounding documents and are working on pieces around the market sub-segments.
by Thomas Murphy | April 23, 2013 | 1 Comment
One of the challenges of requirements management is whether we are capturing the right information. This has led to a number of approaches, often backed by a plethora of tools: models, prototypes and wireframes, user stories and use cases, formal specifications, and more have graced the marketplace. The challenge is that in some cases we may not know what is "required". This is especially true when it comes to innovation. We feel the pressure to just deliver, but what exactly are we delivering? This is the "what is the right it?" problem. Many organizations are looking at lean development approaches, but how do you decide to embark on a project in the first place? Do we actually know what problem we are solving?
A crowdsourced "problem solving company" published a piece in the September 2012 issue of HBR on whether you are solving the right problem, and while, like many consultant/provider-produced articles, it contains a bit of fluff, the base concept is worth thinking on. It opens with an oft-stated "quote" from Einstein (for which no one seems to have a citation):
“If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it,” Albert Einstein said.
Good strides are being made in requirements tools and collaboration practices, and agile/lean provides good tools for testing a premise. However, the better you understand the problem you are solving, the less likely you are to be shooting arrows in the dark, and if you can keep the initial problem definition unhindered by solution design, you may really get the breakthrough you are looking for.
Category: Requirements Tags:
by Thomas Murphy | April 22, 2013 | Submit a Comment
The Nexus Forces are driving a lot of change for IT teams, but they are also going to drive a lot of disruption for vendors. We will see this as companies work to reposition themselves around "hot terms" and through a burst of acquisitions. We have already seen a number of acquisitions as companies grab onto various bits of the DevOps pipeline (most recently, IBM acquiring UrbanCode), and in the testing market we are seeing companies broaden portfolios (most recently, TestPlant acquiring Facilita) or extend into areas such as APM. I expect mobile to be a key acquisition area this year. Users should recognize that in these areas partners may become competitors (disrupting integrations), competitors may become a single company, product overlaps will occur, and tools that respond to Nexus Forces will often be tactical purchases. In some cases new players may disrupt the existing market makers, but most of the truly innovative ones will be acquired by those with the cash and vision to make it happen.