Monday, 2017-06-12

Workshop on Challenges in Performance Methods for Software Development (WOSP-C'16)

A workshop at the 2016 ACM/SPEC International Conference on Performance Engineering (ICPE 2016) in Delft, the Netherlands.

The workshop was held on March 12, 2016. The conference dates were March 12 – March 18, 2016.

Contact: Anne Koziolek koziolek(at)kit(dot)edu

The website of this workshop has moved to https://wosp-c.spec.org

Workshop Outline

New challenges to product performance arise from changes in software and in the development process. Faster development means less time for performance planning. The need for scalability and adaptability increases the pressure while introducing new sources of delay through the use of middleware. Model-driven engineering, component engineering and software development tools offer opportunities, but their exploitation requires effort and carries risks.

This second edition of WOSP-C will explore the challenges and seek to identify the most promising lines of attack on them, through a combination of research and experience papers, vision papers describing new initiatives and ideas, and discussion sessions. Papers describing new projects and approaches are particularly welcome. As implied by the title, the workshop focus is on methods usable anywhere across the life cycle, from requirements to design, testing and evolution of the product. The discussions will attempt to map the future of the field.

This workshop repeats some aspects of WOSP98, the initial Workshop on Software and Performance, which successfully identified the issues of its time; the acronym WOSP-C reflects this. Sessions will combine research, experience and vision papers with substantial discussion of issues raised by the papers or the attendees. At least a third of the time will be devoted to discussion aimed at identifying the key problems and the most fruitful lines of future research.

The workshop proceedings will be published by ACM as part of the ICPE 2016 proceedings in the ACM digital library.

Discussion Topics

These topics are deliberately broad:

  • New ideas, most of all…
  • Model-driven engineering concepts for all kinds of applications, from embedded to SOA
  • The business case for performance methods
  • Performance methods for tighter integration of development and operation (DevOps)
  • Easing the effort of applying performance methods
  • Adding performance issues to software development tools
  • Performance issues in architecture and component engineering
  • Challenges posed by rapid development methods
  • Maximizing the value (for design improvement) of performance measurements and tests
  • Methods for deriving and exploiting performance models of applications
  • Performance challenges and solutions for cloud-based systems and for migrating to the cloud.

Submissions and Dates

Six-page papers in ACM format, describing research results, experience, visions or new initiatives, may be submitted via EasyChair at https://easychair.org/conferences/?conf=wospc2016.

  • Submit by: Dec 6 (extended from Nov 20); the submission site is now closed.
  • Notification to authors: Dec 16.
  • Camera-ready copy: Jan 18 (firm). The list of accepted papers is available.

Templates for the ACM format are available here.

Program Committee and Organizing Committee

Anne Koziolek (Chair), Davide Arcelli, Alberto Avritzer, Andre Bondi, Steffen Becker, Raffaela Mirandola, Murray Woodside.

Workshop Discussion

Report on Joint Discussions in the WOSP-C Performance Engineering Challenges and LT Large-Scale Testing Workshops

ICPE 2016, Delft, Mar 12, 2016

Summary assembled by Andre Bondi, Murray Woodside, Christian Voegele and Jack Jiang

(bondia(at)acm(dot)org, cmw(at)sce.carleton(dot)ca, voegele(at)fortiss(dot)org, zmjiang(at)cse.yorku(dot)ca)

In addition to discussing the presentations during the two workshops (see the program and the ACM Digital Library), there were four joint breakout groups, which discussed the following topics:

  1. The relationship between performance models and performance testing
  2. Collection of representative test data in distributed environments
  3. Performance engineering for dynamic architectures
  4. Performance testing, operating data and DevOps

Summaries of these discussions follow.

1. The Relationship between Performance Models and Performance Testing

In summary, there were three main thoughts:

  1. How: QPMN, simulation models, test/production, ML/regression models (how accurate must a performance model be?)
  2. Input from test to model: derive test cases from performance models
  3. Value: risk mitigation, planning, comparing actual vs. predicted, comparing options/variations

The discussion covered reasons for including models, the modeling technology (including machine-learning models), and the relationship between models and tests in both directions:

  1. ... the model can be calibrated from test data (a brief calibration sketch follows this list)
  2. ... tests can be derived from the model.
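
To make the first point concrete, the following is a minimal sketch (not taken from the workshop discussion) of calibrating a single-queue model from load-test measurements and then using it for prediction; the measurement values and the M/M/1 approximation are illustrative assumptions.

    # Minimal sketch: calibrate a single-queue model from load-test data,
    # then predict response time at an untested load level.
    # All measurement values below are hypothetical.

    def estimate_service_demand(utilization, throughput):
        """Utilization Law: U = X * D, so D = U / X."""
        return utilization / throughput

    def predict_response_time(service_demand, throughput):
        """M/M/1 approximation: R = D / (1 - U), with U = X * D."""
        utilization = throughput * service_demand
        if utilization >= 1.0:
            raise ValueError("model predicts saturation at this throughput")
        return service_demand / (1.0 - utilization)

    # Hypothetical test observation: 40 requests/s at 60% CPU utilization.
    demand = estimate_service_demand(utilization=0.6, throughput=40.0)  # 0.015 s
    # Predict the response time at a higher load that was not tested directly.
    print(predict_response_time(demand, throughput=55.0))  # about 0.086 s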

An active final discussion arose about what makes a model good enough, and how a less detailed model may be preferable because it is quicker to build and easier to use. However:

  1. ... it was agreed that there are no well-accepted objective criteria for judging a model to be “good enough”
  2. ... the criteria should be related to the use of the model, and might be connected as much to business value as to technical criteria
  3. ... one example was the impact of a new feature or system on existing services
  4. ... another example was a model that evaluates the overall impact of a change on a line of business (e.g., sales).

There was also a comment that the model-building process needs to be collaborative between different stakeholders (not just modeling specialists).

2. Collection of Representative Test Data in Distributed Environments

  1. Challenges:
    i. Anonymizing test data from production is often required due to privacy laws
    ii. Data dependencies across multiple databases
    iii. Data must be realistic (distribution of attributes, amount of data)
    iv. Generating data during runtime that supports the dataflow of the application is difficult
  2. Action items:
    i. Automatic test data generation driven by performance requirements (a small generation sketch follows this list)
    ii. Feedback-driven selection and generation of test data by learning the impact of parameter inputs on performance
    iii. Automatic generation of load-test data that fit together across multiple systems, especially identifying data dependencies across databases that are not directly connected
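
As a concrete illustration of the first action item, here is a minimal sketch (an assumption, not workshop output) of generating load-test data whose attribute distributions roughly match production while keeping cross-table dependencies intact; the table names, region weights and distribution parameters are all hypothetical.

    # Minimal sketch: synthetic test data that mimics production distributions
    # and preserves the dependency between customers and orders.
    # Region weights and distribution parameters are hypothetical.
    import random

    random.seed(42)

    REGION_WEIGHTS = {"EU": 0.5, "NA": 0.3, "APAC": 0.2}

    def generate_customers(n):
        regions = random.choices(list(REGION_WEIGHTS),
                                 weights=list(REGION_WEIGHTS.values()), k=n)
        return [{"customer_id": i, "region": r} for i, r in enumerate(regions)]

    def generate_orders(customers, n):
        # Order amounts follow a skewed (lognormal) distribution, as is common.
        return [{
            "order_id": i,
            "customer_id": random.choice(customers)["customer_id"],  # keep the reference valid
            "amount": round(random.lognormvariate(3.0, 0.8), 2),
        } for i in range(n)]

    customers = generate_customers(1000)
    orders = generate_orders(customers, 10000)
    print(orders[0])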

3. Performance Engineering for Dynamic Architectures

The topic was prompted by self-organizing systems that change rapidly at run time, but we decided that it should include any system in which entities and relationships change while it is running (reliable and adaptive systems, for instance). The area needs clearer identification, and maybe a different name. Some examples are:

  1. mobile entities connected by proximity
  2. elastic applications
  3. reconfigurable manufacturing systems
  4. real-time control

The time-scale for change is a major property.

Action Items:

  1. A taxonomy of types of dynamic changes
  2. Identify suitable performance metrics (System metrics, adaptation metrics, reliability, availability and other metrics like energy)
  3. Identify the steps in adaptation (such as decision delays and adaptation delays) and the important attributes of their execution (a small measurement sketch follows this list)
  4. Identify representative case studies
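
For action item 3, here is a minimal sketch of what measuring those delays could look like, assuming the system emits timestamped adaptation events; the event names and times are hypothetical, not an agreed workshop metric.

    # Minimal sketch: derive decision delay and adaptation delay from
    # timestamped events emitted by a self-adaptive system (hypothetical names).
    from datetime import datetime

    events = {
        "change_detected":      datetime(2016, 3, 12, 10, 0, 0),
        "adaptation_decided":   datetime(2016, 3, 12, 10, 0, 2),
        "adaptation_completed": datetime(2016, 3, 12, 10, 0, 9),
    }

    decision_delay = events["adaptation_decided"] - events["change_detected"]
    adaptation_delay = events["adaptation_completed"] - events["adaptation_decided"]

    print("decision delay:", decision_delay.total_seconds(), "s")      # 2.0 s
    print("adaptation delay:", adaptation_delay.total_seconds(), "s")  # 7.0 s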

4. Performance Testing, Operating Data and DevOps

This topic aims to discuss the relationship between performance testing and operating data in the context of DevOps.

First, the group discussed the meaning of DevOps and its strengths:

  1. Rapid feedback about software quality
  2. Ecosystems
  3. Lots of automation (e.g., in deployment, testing, monitoring)
  4. Use of a repository of customer usage data to build workload profiles

The group also identified challenges in conducting performance testing in DevOps:

    • New test cases, new testing techniques (e.g., which version to test, how long to test)

Then the group discussed the use and the limitations of “canary deployment”:

  1. Cannot be applied to mission-critical systems
  2. Depends on how many failures companies can tolerate
  3. “A/B testing” style of performance testing (a small routing sketch follows this list)
  4. How do we select the “canary”?
    • Survey the customers for alpha/beta
    • Select “equivalent” customers
    • Customer satisfaction levels, revenue, application type (safety critical)?
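
To illustrate point 3, here is a minimal sketch of an A/B-style canary comparison, assuming a fixed traffic share and per-request latencies gathered from monitoring; the 5% share and the latency values are invented for illustration.

    # Minimal sketch: route a small share of traffic to the canary and compare
    # its latency against the baseline. All numbers are simulated/hypothetical.
    import random
    import statistics

    CANARY_SHARE = 0.05  # route 5% of requests to the canary

    def route():
        return "canary" if random.random() < CANARY_SHARE else "baseline"

    # Pretend these per-request latencies (seconds) came from production monitoring.
    latencies = {"baseline": [], "canary": []}
    for _ in range(10000):
        target = route()
        base = 0.120 if target == "baseline" else 0.135  # simulated behaviour
        latencies[target].append(random.gauss(base, 0.02))

    for target, samples in latencies.items():
        p95 = statistics.quantiles(samples, n=100)[94]  # Python 3.8+
        print(target, "median:", round(statistics.median(samples), 3),
              "p95:", round(p95, 3))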

Finally, the group discussed the challenges and opportunities in developing and maintaining tools to conduct performance testing in the context of DevOps:

  1. Leveraging the test data collected during CI to help select “suspicious” builds (a simple comparison sketch follows this list)
  2. The need for a regular load-testing process to verify the selected problems
  3. Ecosystems (develop, deploy, CI, automation)
  4. Conceptual mismatch (Dev vs. Ops)
  5. Lots of automation (build, deploy, test, ...)
  6. Repository of production usage data
  7. New testing techniques, new test cases (e.g., test reduction, number of versions, testing time)
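
For point 1, here is a minimal sketch of one way to flag “suspicious” builds from latency samples collected during CI, using a simple median-regression threshold against the previous build; the build names, measurements and 10% threshold are hypothetical.

    # Minimal sketch: flag builds whose median latency regresses by more than a
    # threshold relative to the previous build. All values are hypothetical.
    import statistics

    REGRESSION_THRESHOLD = 0.10  # flag a >10% increase in median latency

    ci_latencies = {  # seconds, collected by a CI performance job
        "build-101": [0.110, 0.115, 0.112, 0.118],
        "build-102": [0.111, 0.114, 0.113, 0.117],
        "build-103": [0.131, 0.135, 0.129, 0.138],  # candidate regression
    }

    builds = sorted(ci_latencies)
    for previous, current in zip(builds, builds[1:]):
        old = statistics.median(ci_latencies[previous])
        new = statistics.median(ci_latencies[current])
        if (new - old) / old > REGRESSION_THRESHOLD:
            print(current, "is suspicious:",
                  "{:+.0%} vs {}".format((new - old) / old, previous))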