Date and Time: April 11, 2016
09:10 - 10:10
Keynote 1: Transferring Software Testing and Analytics Tools to Practice
Tao Xie (University of Illinois at Urbana-Champaign, USA)
Abstract: Producing industry impact, such as tool adoption in industry, has been an important yet challenging task for the research community. Although a few reports reflect on "what to do" (good practices to keep doing and replicate elsewhere), there is still a strong need for more successful technology-transfer cases, and the stories behind them, for the research community to learn from and replicate. This talk presents experiences and lessons learned in successfully transferring software testing and analytics tools to industry, such as Microsoft Research Pex (http://research.microsoft.com/pex), shipped with Visual Studio 2015 as IntelliTest, and various Microsoft Research software analytics tools (http://research.microsoft.com/sa/), either shipped with Visual Studio or widely adopted by engineers inside Microsoft.
Bio: Tao Xie is an Associate Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign, USA. He has worked as a visiting researcher at Microsoft Research. His research interests are in software engineering, focusing on software testing, program analysis, software analytics, software security, and educational software engineering. He was an ACM Distinguished Speaker and is an IEEE Computer Society Distinguished Visitor. He is an ACM Distinguished Scientist. His homepage is at http://taoxie.cs.illinois.edu.
10:10 - 10:30
Paper Session 1
Árpád Beszédes and László Vidács
Academic and Industrial Software Testing Conferences: Survey and Synergies
As in any other profession, conferences, workshops and similar events are an efficient way to exchange ideas and network in software testing. This is true for both professional testers and researchers working in the testing area. However, these two groups usually look for different kinds of events: a tester tends to attend "industrial" (sometimes called practitioner or user) conferences, whereas a researcher is more likely interested in "academic" (in other words, scientific or research) conferences. Although there are notable exceptions, this separation is substantial, and it hinders successful academia-industry collaboration and communication about the demand for, and supply of, research in software testing. This paper reviews 99 conferences: two thirds are academic, the rest industrial. Besides providing this reasonably comprehensive list, we analyze visible synergies, such as events that have a mixed Program Committee and offer a program with elements from both sides. We found only a handful of such events, but these can serve both as opportunities for attendees who wish to broaden their perspectives and as models for organizers of future conferences.
10:30 - 11:00
11:00 - 11:45
Paper Session 2
Nikolay Tcholtchev, Martin Schneider and Ina Schieferdecker
Systematic Analysis of Practical Issues in Test Automation for Communication based Systems
This paper is about issues experienced while testing large-scale industrial products with safety- and security-critical relevance. The challenges in testing - several thousand requirements across several product variants and various configurations - were addressed by test execution automation. However, since fundamental testing concepts as well as architectural concepts were ignored or poorly implemented, the test automation activities faced various difficulties within the considered projects. This paper presents these issues in an abstracted manner and discusses possible solutions.
Emil Alégroth, Marcello Steiner and Antonio Martini
Exploring the Presence of Technical Debt in Industrial GUI-based Testware: A Case Study
Technical debt (TD) describes a sub-optimal solution in a software artifact that negatively affects its comprehensibility, extendability and maintainability. As such, TD adversely affects the costs or quality associated with the artifact, which is also called interest. Research has identified TD in all types of software artifacts, from architectural design to automated tests (testware). However, research into testware technical debt (TTD) is limited and primarily focused on testing at lower levels of system abstraction, i.e. unit and integration tests, leaving a need for more TTD research on GUI-based testing. In this study we explore this gap in knowledge through an industrial case study at a Swedish avionics software development company. Four repositories are studied for the presence of TTD using expert interviews, semi-automated document analysis and automatic metric analysis. The results provide initial support that the concept of TTD is applicable to GUI-based testware and show the presence of both TD items unique to GUI-based testware and items common to software in general. These results imply that engineering best practices must be established for GUI-based testware to minimize TD interest.
11:45 - 12:30
Paper Session 3
Ceren Şahin Gebizli, Hasan Sozer and Ali Ozer Ercan
Successive Refinement of Models for Model-Based Testing to Increase System Test Effectiveness
Model-based testing is used for automatically generating test cases based on models of the system under test. The effectiveness of system tests depends on the contents of these models. We therefore introduce a novel three-step model refinement approach, representing system models as Markov chains. First, we update state transition probabilities in these models based on the usage profile. Second, we update the resulting models based on fault likelihood, estimated with a static analysis of the source code. Third, we update these models based on error likelihood, estimated with dynamic analysis. We generate and execute test cases after each refinement step. We applied our approach in an industrial case study on model-based testing of a Smart TV system, and observed promising results: new faults were revealed after each refinement.
Andreas Hoffmann, Jochen Quante and Matthias Woehrle
Experience Report: White Box Test Case Generation for Automotive Embedded Software
Testing software based on symbolic execution is a promising approach. Recently, various dynamic symbolic execution approaches have been proposed that are of interest for industrial practice. However, the corresponding tools are typically not directly usable on the model-based languages that predominate in the automotive sector. To address this, we use an internal software analysis framework for model-based embedded software that allows us to easily integrate various approaches leveraging static and dynamic symbolic execution. In our case study, we describe idiosyncrasies of automotive embedded software and show the benefits of leveraging symbolic information to generate test cases. We study and compare different techniques for generating test cases, and evaluate all of them empirically on relevant, real-world programs from engine control.
12:30 - 13:30
13:30 - 14:15
Open Session - New Ideas, Experiences and Challenges
Adithya Nagarajan and Ajay Vaddadi
Automated Fault-Tolerance Testing
Software fault tolerance is the ability of computer software to continue its normal operation despite the presence of system or hardware faults. Most companies are moving towards a microservices architecture, in which complex applications are built from a suite of small services that communicate using common protocols such as HTTP. While this architecture enables agility in software development and go-to-market, it poses many challenges. At Groupon, we have over a hundred microservices running on thousands of machines that interact with each other to provide the overall user experience on Groupon web and mobile applications. With so many moving parts, one of the critical challenges is to assess the fault tolerance and resiliency of the overall system. A failure in one dependent service can cause an unexpected impact on upstream services, causing severe customer-facing issues. Such issues result from a lack of resiliency in the system architecture. There is a need for an automated tool that can understand the service architecture and topology, inject faults, and assess the fault tolerance and resiliency of the system. In this paper, we present Screwdriver, a new automated solution developed at Groupon to address this need.
Calot: A Combinatorial Interaction Testing Tool
We report a software testing technique, called Calot, under development in our collaborative research project between academia and industry. Calot is a model-based combinatorial interaction testing technique. The project aims to develop Calot by extending and improving existing techniques in several respects for technology transfer, such as extending the modeling language, improving test generation algorithms and devising a notion of priority. Also aiming at technology transfer, we are developing a tool with a user-friendly GUI (Graphical User Interface) for the technique. In this collaboration, we are also conducting a case study in which we apply the technique and the tool to embedded systems actually used in industry; we aim to derive empirical data from the case study on the effectiveness of the proposed technique and tool. We also report our experiences of the collaborative research project between academia and industry.
14:15 - 15:00
Paper Session 4
Sahar Tahvili, Mehrdad Saadatmand, Stig Larsson, Wasif Afzal, Markus Bohlin and Daniel Sundmark
Dynamic Integration Test Selection Based on Test Case Dependencies
Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses selecting a subset of an existing set of test cases, typically by discarding test cases that do not add value in improving the quality of the software under test. Most existing approaches for test case prioritization and selection suffer from one or several drawbacks. For example, they largely rely on static analysis of code, making them unfit for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows discarding test cases that do not need to be executed and are thus redundant. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses the above issues. We also present the results of an industrial case study in which initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.
M. Evren Coşkun, M. Melta Ceylan, Kadir Yiğitözu and Vahid Garousi
A Tool for Automated Inspection of Software Design Documents and Its Empirical Evaluation in an Aviation Industry Setting
While software inspection is an effective activity for detecting defects early in the software development lifecycle, it is effort-intensive and error-prone. Motivated by a real need in the context of Turkish Aerospace Industries Inc. (TAI), a tool named AutoInspect was developed to (semi-)automate the inspection of software design documents and, as a result, to increase the efficiency and effectiveness of the inspection process. We present in this paper the features of the tool, its development details and its initial evaluation on the design documents of three real systems in the company. The results of the initial case study reveal that the tool is indeed able to increase inspection efficiency and effectiveness. In terms of efficiency, inspection engineers who used AutoInspect performed 41-50% more efficiently on the three design documents under study than when the tool was not used (i.e., manual inspection). In terms of effectiveness, compared to manual inspection, the automated approach found 23-33% more defects in the three design documents under study. As the tool currently provides only partial automation, our efforts are underway to increase its automation level even further.
15:00 - 15:30
15:30 - 16:15
Paper Session 5
Paula Raappana, Soili Saukkoriipi, Ilkka Tervonen and Mika Mäntylä
The Effect of Team Exploratory Testing – Experience Report from F-Secure
Practitioners have found exploratory testing (ET) to be cost-effective in detecting defects. The team exploratory testing (TET) approach scales exploratory testing to the team level. This paper reports the effectiveness of TET and the experiences of the participants of TET sessions. The research was carried out at F-Secure Corporation, where two projects were investigated. The results show that the TET sessions have good effectiveness and higher efficiency than other testing methods in the company, measured in the number of defects detected. Furthermore, the TET sessions found more usability defects than other methods. The session participants saw benefits especially in the joint discussion and in learning the target application. However, with respect to test effectiveness and efficiency we should be cautious, as further studies are needed to compensate for the limitations of this work.
Mark Micallef, Chris Porter and Andrea Borg
Do Exploratory Testers Need Formal Training? An Investigation Using HCI Techniques
Exploratory software testing is an activity that can be carried out by both untrained and formally trained testers. We personify the former as Carmen and the latter as George. In this paper, we outline a joint research exercise between industry and academia that contributes to the body of knowledge by (1) proposing a data gathering and processing methodology that leverages HCI techniques to characterise the differences in the strategies used by Carmen and George when approaching an exploratory testing task; and (2) presenting the findings of an initial study of twenty participants: ten formally trained testers and ten with no formal training. Our results shed light on the types of strategies used by each type of tester, how they are used, the effectiveness of each type of strategy in terms of finding bugs, and the types of bugs each tester/strategy combination uncovers. We also demonstrate how our methodology can be used to help assemble and manage exploratory testing teams in the real world.
16:15 - 16:45
Keynote 2: The Future of Testing
Gerd Weishaar (Tricentis GmbH, Austria)
Abstract: The new wave of digital disruption forces customers to exploit IT both to create new sources of customer value and to increase their operational agility. As a result, agile methodologies and DevOps need to be adopted to help reduce development cycle times significantly. How is software testing affected? What will the test organization of tomorrow look like? What is the impact on service providers, test approaches and tools? Gerd Weishaar, VP of Product Management at Tricentis, shares his perspective on an IT world where Continuous Testing dictates the future.
Bio: Dr. Gerd Weishaar has over 20 years of experience in product development and product management at various software companies, including Segue Software, Rational Software and IBM. Prior to joining Tricentis, Dr. Weishaar was responsible for product management at Update Software, a leading vendor in the field of Customer Relationship Management (CRM) software. As VP of Product Management at Tricentis, Dr. Weishaar currently manages a team of experts with a strong focus on market demands and needs. His team is responsible for managing the entire product lifecycle of the Tricentis Tosca Testsuite and for product-marketing-related content and communication.
16:45 - 17:00
Best Paper & Closing