
Performance Testing: The Earlier the Better for Successful System Implementations

27/11/2013


Performance testing of software is often a neglected component of application development and system replacement projects. It is frequently ignored, or relegated to a period of ‘performance tuning activities’ at the end of a project, in favour of functional development. When a typical project starts to go sideways and targets slip, the time allocated for quality assurance (QA) and fine-tuning is generally the first to be cut.

Why does this legacy of ‘waterfall’ planning continue to exist in an agile world, where incremental development and adaptability are founding principles?

The Risks of Delaying Performance Testing

Waiting to performance test until the system is stable at the end of a project means you’ll likely only have time to scale out the environment to meet initial demand. Specific, large-scale performance problems caused by poor code or inefficient architecture likely can’t be resolved unless go-live is delayed.

Leaving performance testing to the end of the project may put you at risk for sluggish performance and intolerable wait times. This can lead to dissatisfied users and cycles of emergency patches to improve performance.

Effective Performance Testing Starts Early

Performance testing can be accomplished in an agile project by incorporating it as part of the process and prioritizing it appropriately.

Effective performance testing should be planned for and included from the inception of a project and made part of a continuous cycle of QA. As part of its QA acceptance, every story must include a set of performance metrics to be met before the story can be marked as complete.

A standard story card goes something like: “As an x I want y so that z.” It is generally further defined by acceptance criteria such as: “x is working,” “x cannot do y without doing z,” or “x is stored in y for use elsewhere.”

Key components of acceptance criteria:

  • Gets the team to think through how a feature or piece of functionality will work from the user’s perspective
  • Removes ambiguity from requirements
  • Forms the tests that will confirm if a feature or piece of functionality is working and complete

But acceptance criteria generally define functional acceptance only. You will rarely see performance criteria included in a requirement like this: “x must return results in less than y seconds when the server is under z load 19 times out of 20 and less than u seconds when the server is under w load 18 times out of 20.”
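A criterion like that maps directly onto an automated check. As a minimal sketch (the thresholds, ratios, and the `search_stub` request are hypothetical stand-ins for a real system under test), the “19 times out of 20” rule is just a success-ratio assertion over measured latencies:

```python
import time

def measure_latencies(request_fn, n=20):
    """Run the request n times and record wall-clock latencies in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)
    return latencies

def meets_criterion(latencies, threshold_s, success_ratio):
    """True if the required fraction of requests finished under the threshold."""
    under = sum(1 for t in latencies if t < threshold_s)
    return under / len(latencies) >= success_ratio

# Hypothetical stub standing in for the real request under test.
def search_stub():
    time.sleep(0.001)

# "x must return results in less than 0.5 s, 19 times out of 20."
latencies = measure_latencies(search_stub, n=20)
print(meets_criterion(latencies, threshold_s=0.5, success_ratio=19 / 20))
```

Checks like this can sit alongside the functional acceptance tests in the QA cycle, so a story fails its acceptance on performance just as it would on behaviour.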

A good performance testing plan will define:

  • The performance criteria the system is required to meet
  • An explanation of how the performance criteria will be measured and how they align with business objectives
  • Remediation steps to explain how failures will be prioritized, handled and resolved

A team with a solid grasp of the importance of system performance can incorporate performance testing tasks and remediation into an agile project by defining performance objectives, systematically evaluating the system and defining failures as new stories to deliver.
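The “failures become new stories” step can also be made mechanical. In this sketch (the criterion IDs, descriptions, and measured numbers are invented for illustration), each criterion the system failed to meet is turned into a backlog story carrying its own acceptance threshold:

```python
# Hypothetical performance criteria with measured results from a test run.
criteria = [
    {"id": "PERF-1", "description": "search returns in < 0.5 s at 100 users",
     "threshold_s": 0.5, "measured_s": 0.8},
    {"id": "PERF-2", "description": "login returns in < 1.0 s at 50 users",
     "threshold_s": 1.0, "measured_s": 0.4},
]

def failures_to_stories(criteria):
    """Create a remediation story for each criterion the system failed to meet."""
    return [
        {
            "title": f"Remediate {c['id']}: {c['description']}",
            "acceptance": f"measured latency <= {c['threshold_s']} s",
        }
        for c in criteria
        if c["measured_s"] > c["threshold_s"]
    ]

for story in failures_to_stories(criteria):
    print(story["title"])
```

Keeping remediation in the backlog this way lets performance failures be prioritized and scheduled like any other story, rather than deferred to an end-of-project tuning phase.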

A common issue for many development teams is a shortage of people with the depth of experience to conduct effective and efficient performance testing. One option to consider is engaging a team with the knowledge and experience to analyze, implement, manage and provide training on a performance testing plan in a cost-effective manner.

MNP’s performance testing team is experienced in conducting thorough performance analyses that integrate seamlessly within the agile process on projects both large and small.