Be a Performance Testing Early Bird

This article will show you how starting performance testing earlier – with minimal effort – can reduce schedule pressure and improve software quality, specifically where performance is concerned.

A significant challenge that negatively affects an organization’s ability to complete a reasonable level of performance testing is schedule pressure. Performance testing, specifically load testing, demands a measure of application stability that often cannot be reached until later in the development lifecycle. Overruns in prior stages – development, integration, and test – contribute to schedule slips that ultimately impact an organization’s ability to both test effectively and deliver on time while staying within budget. So, Start Earlier!

6 Approaches to Testing Early and Reducing Schedule Pressure

Here are six approaches to, and reasons for, starting performance testing early. They may seem simple, but how many do you actually do?

  1. Passive Monitoring of Ongoing Testing
    Testing inputs exist in the system well before the formal performance test phase, so why aren’t you leveraging them to start characterizing the performance of your application? Your organization may or may not be doing formal performance testing, but almost certainly some level of testing is happening. This can serve as system input to start performance characterization.

    Passively monitoring ongoing unit, integration, and system test cases is an excellent approach to begin validating application architectures and finding performance-related low-hanging fruit. You may be asking yourself: what can be gained from early monitoring? Let me explain.
  2. Validate Application Architectures Now, Not Later
    Application architectures should be validated immediately, before they become too costly to correct. Application enhancements, including significant new releases, regularly inherit existing architectures that were designed for a different purpose. Validating that legacy architectures continue to support the demands of new functionality must be done up-front. If you find a problem, your customer and your budget will thank you for finding it early.
  3. Low-Hanging Fruit Pays Dividends
    The complexity of a defect is independent of its impact on schedule. A missing semicolon can break a software build, and a simple defect can stop a test team in its tracks. Testing for and resolving performance low-hanging fruit during the development and integration cycles translates to higher-quality deliveries to your test teams. Taking these extra steps up-front enables higher throughput in test case execution and completion.

    Keep in mind we coined the term “blocking bugs” for a reason: they block progress. Every low-hanging fruit performance defect you address in the development stage translates directly to increased productivity and throughput on your back-end test functions.
  4. How Do I Start? (Hint: Tools Are Your Friend)
    Tools enable you to do more with less, period. You wouldn’t build a house without a hammer and nails; in the same way, you wouldn’t start your performance investigations without appropriate tools, would you? There are plenty of arguments about free tools – some people incorrectly equate free with open source – versus commercial tools, but at the end of the day, whether you rely on free, open-source, or commercial tools, you need a toolkit.

    Your toolkit is what enables you to passively monitor and characterize performance without the support of development, which is busy worrying about building functionality. Tools enable you to peer inside an otherwise black box; you will learn a lot by just listening. So at a minimum, what tools are applicable?
    • Application Profilers - For general-purpose application performance monitoring, profiling will show you candidates for performance improvement.
    • Memory Monitoring - Depending on the formality of the environment, it may make sense to periodically monitor and profile application memory utilization. The monitoring you put in place now may point you to specific conditions, such as logging problems triggered by attempts to log large objects.
    • Resource Utilization – The classics: CPU, disk, network, memory. Not as interesting in the early stages, but put your processes in place to collect and interpret this data from the start.
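To make the passive-monitoring idea concrete, here is a minimal sketch using Python's standard-library profiler. The function `existing_system_test` is a hypothetical stand-in for a real test entry point; the point is that the test itself needs no changes to yield timing data.

```python
import cProfile
import io
import pstats

def existing_system_test():
    """Stand-in for an existing functional test; any test entry point works."""
    return sum(i * i for i in range(100_000))

# Passively profile the test run -- the test code itself is unchanged.
profiler = cProfile.Profile()
profiler.enable()
existing_system_test()
profiler.disable()

# Summarize the hottest call paths for later drop-over-drop comparison.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
print(report)
```

In practice you would point the profiler (or an equivalent tool for your stack) at your real test harness and archive the reports, so that later drops have a baseline to be compared against.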

Of course, I have to mention that RTI – OC Systems’ lightweight deep performance diagnostics tool – is an excellent choice for gaining deep visibility into, and performance metrics from, your complex web application.

For additional thoughts on a variety of specific tools, take a look at Build Your Toolkit – the right tools for the right job on the OC Systems Blog.

  5. Where Do I Start?
    When you look to define an environment, or a milestone, in order to start performance characterization early in the application lifecycle, use the following criteria:
    • CM Control – Is some level of CM control in place? Is the application relatively stable with scheduled code drops that you can plan for and around?
    • Existing Ongoing Testing – Unit, Integration, or Development test cases can be leveraged as inputs for performance monitoring. Although we always have the option of doing more, we start by leveraging what’s already in place. Reduce, reuse, recycle – sound familiar? Knowing how to leverage existing work is a major aspect of doing more with less.

We typically find these characteristics in integration environments. Also of note: historically, development is simply too volatile to gain meaningful metrics.

  6. Getting Fancy with Continuous Integration (bonus points)
    When working in a continuous integration or Agile development environment, you have the opportunity to get fancy. Using your toolkit in your defined test environment, begin by baselining your application performance by drop (or release). Look specifically at your profiling information and compare method-level timings across drops. Periodic reporting and monitoring allow you to identify performance defects and trace them back to the code introduced in a specific drop or build.

    If and when you can reliably identify performance degradations at the point they are introduced, that is cause to celebrate and promote your capabilities! Don’t be afraid to highlight the value of testing for performance. A stronger case is always made by providing specific, objective data points.
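The drop-over-drop comparison described above can be sketched as a simple threshold check. The method names and timings below are hypothetical illustrations, not output from any particular profiler:

```python
def find_regressions(baseline, current, threshold=0.20):
    """Flag methods whose timing grew more than `threshold` versus baseline."""
    regressions = {}
    for method, base_ms in baseline.items():
        cur_ms = current.get(method)
        if cur_ms is None:
            continue  # method removed or renamed in this drop
        growth = (cur_ms - base_ms) / base_ms
        if growth > threshold:
            regressions[method] = round(growth, 2)
    return regressions

# Hypothetical method-level timings (in milliseconds) from two code drops.
drop_1 = {"OrderService.submit": 120.0, "CartDao.load": 45.0}
drop_2 = {"OrderService.submit": 180.0, "CartDao.load": 46.0}
flagged = find_regressions(drop_1, drop_2)
print(flagged)  # OrderService.submit grew 50%, well past the 20% threshold
```

Wired into a CI job, a report like this ties a degradation to the specific drop that introduced it, which is exactly the objective data point worth promoting.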


By starting performance testing and monitoring earlier in the application lifecycle, you will positively impact the program schedule and increase the likelihood of delivering higher-quality software on time and within budget. Performance testers can begin application performance characterization by leveraging ongoing testing and tools to validate application architecture and resolve low-hanging performance defects before formal performance testing begins.

Steve Sturtevant is a consultant working on the Automated Commercial Environment (ACE) program at U.S. Customs & Border Protection where his focus has been providing performance engineering and testing services across the software lifecycle. He is employed as a Senior Software & Performance Engineer by OC Systems, as well as Product Manager for the OC Systems RTI product which delivers software performance diagnostics for J2EE systems. Prior to ACE, Steve worked as a performance engineer for several large DoD programs as well as a developer working on real-time systems.

Steve will present session 902: 5 Ways to Do More Performance Testing in Less Time along with James Pulley CTO of Newcoe Performance Engineering at the Software Test Professionals Conference Fall 2011, October 24-27 in Dallas, Texas.