Interpreting and Reporting Performance Test Results

You’ve worked hard to define, develop, and execute a performance test on a new application to determine its behavior under load. You have barrels full of numbers. What’s next? The answer is definitely not to generate and send a canned report from your testing tool. Interpreting and reporting results is where a performance tester earns their stripes.

In the first half of this workshop, we’ll look at results from actual projects and together puzzle out the essential message in each. This will be a highly interactive session: we’ll display a graph, provide a little context, and ask, “What do you see here?” We’ll form hypotheses, draw tentative conclusions, determine what further information we need to confirm them, and identify the key target graphs that give the best insight into system performance and bottlenecks (one such graph is sketched below).

Please bring your own sample results (on a thumb drive, so we can load and display them) and we’ll interpret them together!
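
For example, one target graph that frequently exposes bottlenecks overlays response-time percentiles with throughput across the run: when throughput plateaus while p90 response time keeps climbing, the system has likely hit saturation. The sketch below is illustrative only; it assumes a hypothetical results.csv with elapsed_s and response_ms columns, and uses Python with matplotlib:

    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    INTERVAL = 30  # seconds per time bucket

    # Collect raw samples into elapsed-time buckets.
    buckets = defaultdict(list)
    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            bucket = int(float(row["elapsed_s"]) // INTERVAL)
            buckets[bucket].append(float(row["response_ms"]))

    xs = sorted(buckets)
    times = [b * INTERVAL for b in xs]
    # Nearest-rank 90th percentile per bucket; less noisy than the max.
    p90 = [sorted(buckets[b])[int(0.9 * (len(buckets[b]) - 1))] for b in xs]
    # Requests completed per second in each bucket.
    tput = [len(buckets[b]) / INTERVAL for b in xs]

    fig, ax1 = plt.subplots()
    ax1.plot(times, p90, color="tab:red", label="p90 response (ms)")
    ax1.set_xlabel("elapsed time (s)")
    ax1.set_ylabel("p90 response (ms)")
    ax2 = ax1.twinx()  # second y-axis so both series share the time axis
    ax2.plot(times, tput, color="tab:gray", linestyle="--", label="throughput (req/s)")
    ax2.set_ylabel("throughput (req/s)")
    fig.legend(loc="upper left")
    fig.savefig("overlay.png")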

In the second half of the session, we’ll try to codify the analytic steps we went through in the first half and consider a CAVIAR approach to collecting and evaluating test results (sketched in code after the list):

  • Collecting
  • Aggregating
  • Visualizing
  • Interpreting
  • Analyzing
  • Reporting

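To make the first steps concrete, here is a minimal Python sketch that collects raw response times, aggregates them into robust summary statistics, and reports them as a table. The transaction names and timings are invented for illustration; percentiles are reported alongside the median because a few outliers can hide an intermittent stall from the mean:

    import statistics

    # Collect: raw response times (ms) per transaction,
    # e.g. parsed from your load tool's result log (invented data here).
    samples = {
        "login":  [212, 198, 240, 1870, 225, 231, 2010, 219],
        "search": [95, 101, 88, 93, 110, 97, 480, 102],
    }

    # Aggregate and report: reduce each sample set to count, median, p90, max.
    print(f"{'transaction':<12}{'count':>6}{'median':>8}{'p90':>8}{'max':>8}")
    for name, times in samples.items():
        deciles = statistics.quantiles(times, n=10)  # deciles[8] is the 90th percentile
        print(f"{name:<12}{len(times):>6}{statistics.median(times):>8.0f}"
              f"{deciles[8]:>8.0f}{max(times):>8.0f}")

    # Interpret: a p90 far above the median (as in "login" here)
    # flags intermittent slowness worth investigating further.

The later steps, visualizing through reporting, build on aggregates like these and are the focus of the workshop discussion.
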
Performance Workshop
Presenters: Dan Downing and Eric Proegler
Location: Fairbanks Terrace D
Date: March 31, 2015
Time: 1:00 pm - 5:00 pm