HSST 2015
5th Halmstad Summer School on Testing, HSST 2015
June 8-11, 2015 - Halmstad University, Sweden
Contents
- 1 Introduction
- 2 Tutorials
- 2.1 Approximate Formal Verification Using Model-Based Testing, Rance Cleaveland
- 2.2 Taking Search-Based Software Testing to the Real-World, Robert Feldt
- 2.3 Differential and Multi-Version Program Verification, Shuvendu Lahiri
- 2.4 Learning-based Testing of Procedural and Reactive Systems, Karl Meinke
- 2.5 Model-based Testing of Embedded Real-time Systems under Uncertainty, Brian Nielsen
- 2.6 Model-based Testing of Software Product Lines, Ina Schaefer
- 2.7 Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Elaine Weyuker
- 3 Speakers
- 4 Tentative Program
- 5 Registration
- 6 Venue
- 7 History
- 8 Organizers
Introduction
Testing and debugging account for a major part of software development cost and effort, yet the current practice of software testing is often insufficiently structured and disciplined. There have been various attempts in the past decades to bring more rigour and structure into this field, resulting in several industrial-strength processes, techniques and tools for different levels of testing. The 5th Halmstad Summer School on Testing provides an overview of the state of the art in testing, including theory, industrial cases, tools and hands-on tutorials by internationally-renowned researchers.
Tutorials
Approximate Formal Verification Using Model-Based Testing, Rance Cleaveland
In model-based testing, (semi-)formal models of systems are used to drive the derivation of test cases to be applied to the system under test (SUT). The technology has long been a part of traditional hardware-design workflows, and it is beginning to find application in embedded-software development processes as well. In automotive and land-vehicle control-system design in particular, models in languages such as MATLAB® / Simulink® / Stateflow® are used to drive the testing of the software that controls vehicle behavior, with tools like Reactis®, developed by a team including the speaker, providing automated test-case generation support for this endeavor.
This tutorial will discuss how test-case generation capabilities may also be used to help verify that models meet formal specifications of their behavior. The method we advocate, Instrumentation-Based Verification (IBV), involves the formalization of behavior specifications as models that are used to instrument the model to be verified, and the use of coverage testing of the instrumented model to search for specification violations. The presentation will discuss the foundations of IBV, the test-generation approach and other features in Reactis that are used to support IBV, and the results of several case studies involving the use of the methods.
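In miniature, the IBV workflow looks roughly like this. Everything below is a hypothetical stand-in: in practice the model is a Simulink/Stateflow diagram, the specification monitor is itself a model, and Reactis generates the coverage tests.

```python
# A toy illustration of Instrumentation-Based Verification (IBV).
# The model, monitor, and test cases are all invented for illustration.

def cruise_model(speed, brake_pressed):
    """Toy controller model: computes a throttle output."""
    if brake_pressed:
        return 0                      # braking must cut the throttle
    return min(speed + 1, 100)

def monitor(speed, brake_pressed, throttle):
    """Monitor encoding the spec 'if the brake is pressed, throttle is 0'.
    Returns True if this step satisfies the specification."""
    return (not brake_pressed) or throttle == 0

def instrumented_run(test_cases):
    """Run coverage tests on the instrumented model, collecting violations."""
    violations = []
    for speed, brake in test_cases:
        out = cruise_model(speed, brake)
        if not monitor(speed, brake, out):
            violations.append((speed, brake, out))
    return violations

# Generated tests would aim to cover all monitor branches; an empty result
# means no specification violation was found on these tests.
print(instrumented_run([(50, False), (50, True), (120, False)]))  # → []
```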
Taking Search-Based Software Testing to the Real-World, Robert Feldt
Since the late 1990s, much research (and hype) has gone into search-based software engineering (SBSE) and, in particular, search-based software testing (SBST). The basic idea is to use (cheap) CPUs rather than (expensive) humans: we formulate our SE problems as optimization problems and apply search to find solutions. The human engineers then guide and direct the search and judge when we have reached something "good enough", but act less as the main driving designers. As an example from testing, it is clear that we want to create tests that find faults and cover all "parts" of the software, but it can be very hard to create a test suite that accomplishes this. If we formulate the problem as "find the test suite of 10 test cases that gives the maximum coverage", and we have a way to measure the sought-for coverage, we can then search for the best test suite.
Even though the area of SBST has made excellent progress in the last 15 years there is a lack of practically useful tools. A major problem is that research has focused on generating test cases with numerical inputs. In this tutorial I will present our work on building more real-world useful SBST tools that can generate complex inputs and I will show examples and let you try out the tools in practice. I will also show how an integration of many existing technologies is needed to make these tools practically useful, in particular ways to present the search progress and results as well as allowing the user to interact and direct the search. I will also touch on how these technologies can be used for testing non-functional properties such as performance and robustness.
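The "search for the best test suite" formulation above can be sketched as a simple local search. The `coverage()` function here is an invented stand-in for real instrumentation-based coverage measurement, so this is only an illustration of the search idea, not of any particular SBST tool:

```python
import random

def coverage(test_input):
    """Invented stand-in: pretend each input covers a few branch ids (0-6)."""
    return {test_input % 7, (test_input * 3) % 7}

def suite_coverage(suite):
    """Fitness: number of distinct branches covered by the whole suite."""
    covered = set()
    for t in suite:
        covered |= coverage(t)
    return len(covered)

def search_suite(size=10, iterations=1000, seed=0):
    """(1+1)-style local search: mutate one test, keep if not worse."""
    rng = random.Random(seed)
    suite = [rng.randrange(100) for _ in range(size)]
    best = suite_coverage(suite)
    for _ in range(iterations):
        candidate = list(suite)
        candidate[rng.randrange(size)] = rng.randrange(100)
        score = suite_coverage(candidate)
        if score >= best:             # accept equal moves to escape plateaus
            suite, best = candidate, score
    return suite, best

suite, score = search_suite()
print(score)   # coverage of the best suite found (at most 7 branches here)
```

Real SBST tools replace the toy fitness function with measured branch or statement coverage of the SUT, and often use genetic algorithms rather than this minimal hill climber.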
Differential and Multi-Version Program Verification, Shuvendu Lahiri
Program verification has made a lot of progress over the last few decades. However, fundamental problems remain that undermine the widespread adoption of these powerful techniques, including (a) the need for users to write well-formed specifications, (b) the need to infer or write complex program-specific invariants, and (c) the need to model underspecified environments. The absence of these leads to numerous false (often dumb) warnings that hinder initial adoption in the hands of developers. Differential, or multi-version, program verification has the potential to tolerate many of these shortcomings of (single-version) program verification, while at the same time offering new kinds of assurance. There are several opportunities for differential analysis, including (a) performing incremental analysis, (b) using one program as a specification to provide (weaker) relative correctness, (c) checking new classes of differential properties (beyond equivalence), and (d) exploiting structural similarity to use more scalable abstractions and invariant inference. The talk will summarize the current state of research in multi-version program analysis and verification. We will introduce the formalisms for multi-version verification, including mutual summaries as a mechanism for stating relative specifications. We will describe applications of differential reasoning to equivalence checking of compiler versions, verifying the correctness of bug fixes, and ranking verification warnings.
An outline of some of the papers can be found on the SymDiff webpage: http://research.microsoft.com/en-us/projects/symdiff/default.aspx
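As a toy illustration of the differential question (not of SymDiff itself, which reasons symbolically with an SMT solver rather than by testing), one can compare two versions of a function on sampled inputs and collect the behavioural differences; the two clamp functions below are invented examples:

```python
def clamp_v1(x):
    """Old version: clamp x into the range [0, 10]."""
    return min(max(x, 0), 10)

def clamp_v2(x):
    """New version: the upper bound was changed to 12."""
    return min(max(x, 0), 12)

def differential_check(f_old, f_new, inputs):
    """Return the inputs on which the two versions disagree
    (counterexamples to equivalence)."""
    return [x for x in inputs if f_old(x) != f_new(x)]

# The versions agree below 11 and differ from 11 upward.
print(differential_check(clamp_v1, clamp_v2, range(-5, 16)))
# → [11, 12, 13, 14, 15]
```

A symbolic tool would prove equivalence (or produce one counterexample) for all inputs, not just sampled ones; the point here is only the shape of the differential question: one program version serves as the specification for the other.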
Learning-based Testing of Procedural and Reactive Systems, Karl Meinke
Learning-based testing (LBT) is a new paradigm for black-box requirements testing that combines machine learning with model checking. The basic idea is to incrementally reverse-engineer an abstract model of a system under test (SUT) by applying machine learning techniques to black-box test cases and their results. Each abstract model M_i is then model checked against a user requirement R, and any counterexamples to R are submitted as new test cases to the SUT, to establish whether they are true or false negatives. True negatives are requirements failures in the SUT, while false negatives refine the model to M_{i+1} on the next learning cycle. In this way, a sequence of models M_0, M_1, … is generated that gradually converges to the SUT, where M_0 is the so-called "null hypothesis".
Test verdict generation (pass/fail/warning) is fully automatic, based on a simple equality test. So a high degree of test automation is achieved. In practice many thousands of test cases per hour can be executed, with greater effectiveness than random testing.
LBT is a general paradigm that can be applied to any class of software systems for which there exist efficient machine learning and model checking algorithms. We can illustrate this generality with early research on testing procedural "C"-style programs against Hoare style pre- and postconditions. Of considerable recent interest has been the application of LBT to testing reactive systems, based on the extensive knowledge we have about automata learning algorithms and temporal logic model checkers.
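The LBT cycle above can be caricatured for a stateless SUT as follows. The lookup-table "model", the injected fault, and the requirement are all hypothetical simplifications of the automata-learning setting described in the abstract; they only illustrate the learn / model-check / test loop:

```python
import random

def sut(x):
    """Black-box system under test (faulty for inputs of 10 and above)."""
    return 2 * x if x < 10 else 0

def requirement(x, y):
    """R: the output is at least as large as the input."""
    return y >= x

def model_check(model):
    """'Model check' the learned model: search it for a violation of R."""
    for x, y in model.items():
        if not requirement(x, y):
            return x                      # counterexample candidate
    return None

def lbt(sut, requirement, rng, max_cycles=100):
    """One run of the learn / model-check / test cycle."""
    model = {}                            # M_0: the null hypothesis
    for _ in range(max_cycles):
        cex = model_check(model)
        if cex is not None:
            if not requirement(cex, sut(cex)):
                return ("fail", cex)      # true negative: SUT violates R
            model[cex] = sut(cex)         # false negative: refine the model
        else:
            x = rng.randrange(20)         # fresh learning query / test case
            model[x] = sut(x)
    return ("pass", None)

print(lbt(sut, requirement, random.Random(1)))
```

In real LBT the model is an automaton that generalizes beyond the queried inputs, so model checking can produce counterexamples the SUT has never seen; the equality-based verdict generation is then exactly the "simple equality test" mentioned above.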
PDF Slides Part I, PDF Slides Part II, PDF Slides Part III, PDF Slides Part IV
Model-based Testing of Embedded Real-time Systems under Uncertainty, Brian Nielsen
Model-based analysis and testing is a promising technique for improving the quality of testing by automatically generating an efficient set of provably valid test cases from a system model. Testing the real-time properties of embedded systems is challenging because it must deal with timing, concurrency, the processing and computation of complex mixed discrete and continuous signals, and limited observation and control. Several model-based techniques and tools have been proposed, but few deal systematically with models capturing the indeterminacy resulting from concurrency, timing, and limited observability and controllability.
This lecture proposes a number of model-based test generation principles and techniques that aim at efficient testing of timed systems under uncertainty. In particular, the lecture will outline an input/output conformance testing theory for real-time systems, introduce Uppaal timed automata and the Uppaal tool suite for timed modelling, verification, and test generation, present algorithms and tools for online and offline test generation, and propose timed games as a means of modelling uncertainty and non-determinism.
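A drastically simplified flavour of timed conformance checking (far simpler than the Uppaal-based input/output conformance theory covered in the lecture; the timing contract and the trace below are invented):

```python
# Toy timed conformance: the "model" is a timing contract stating that
# after each input, the matching output must arrive within [lo, hi]
# time units. Real timed-automata models express far richer behaviour.

def conforms(trace, lo, hi):
    """trace: list of (t_input, t_output) pairs with absolute timestamps.
    Returns the list of timing violations (empty means conformance)."""
    violations = []
    for t_in, t_out in trace:
        delay = t_out - t_in
        if not (lo <= delay <= hi):
            violations.append((t_in, t_out, delay))
    return violations

# Outputs must follow inputs within 2 to 5 time units; the last response
# takes 7 units and is therefore a timing violation.
print(conforms([(0, 3), (10, 12), (20, 27)], lo=2, hi=5))
# → [(20, 27, 7)]
```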
Slides, exercises, and other material
Model-based Testing of Software Product Lines, Ina Schaefer
Software pervades more and more of our everyday lives; hence, we place high demands on its trustworthiness. Software testing greatly contributes to the quality assurance of modern software systems. However, as today's software systems get ever more complex and exist in many different variants, we need rigorous and systematic approaches to software testing. In this tutorial, we first present model-based testing as an approach for systematic test-case generation, test execution, and test-result evaluation for single-system testing. The central idea of model-based testing is to base all testing activities on an executable model-based test specification. Second, we consider model-based testing for variant-rich software systems and review two model-based software product line testing techniques: sample-based testing generates a set of representative variants for testing, while variability-aware product line testing uses a family-based test model which contains the model-based specification of all considered product variants.
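Sample-based product-line testing can be sketched with a greedy pairwise sampler over boolean features. The feature names below are invented, and a real SPL tool would also respect the constraints of a feature model, which this sketch ignores:

```python
from itertools import combinations, product

FEATURES = ["bluetooth", "gps", "camera"]   # hypothetical feature set

def pairs(variant):
    """All (feature, value) interaction pairs exercised by one variant,
    where a variant is a tuple of booleans, one per feature."""
    items = list(zip(FEATURES, variant))
    return set(combinations(items, 2))

def pairwise_sample():
    """Greedy sampling: repeatedly add the variant that covers the most
    still-uncovered pairwise interactions."""
    all_variants = list(product([False, True], repeat=len(FEATURES)))
    uncovered = set().union(*(pairs(v) for v in all_variants))
    sample = []
    while uncovered:
        best = max(all_variants, key=lambda v: len(pairs(v) & uncovered))
        sample.append(best)
        uncovered -= pairs(best)
    return sample

sample = pairwise_sample()
print(len(sample), "variants instead of", 2 ** len(FEATURES))  # 4 instead of 8
```

Even for three features the sample is half the size of the full product space, and the gap grows exponentially with the number of features, which is what makes sampling attractive for variant-rich systems.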
PDF Slides Part I, PDF Slides Part II
Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Elaine Weyuker
It would obviously be very valuable to know in advance which files in the next release of a large software system are most likely to contain the largest numbers of defects. To accomplish this, we developed a negative binomial regression model and used it to predict the expected number of bugs in each file of the next release of a software system. The predictions are based on code characteristics and bug and modification history data. We will discuss what we have learned from applying the model to make predictions for 170 releases of nine large industrial systems, each with multiple years of field exposure, and tell you about our success in making these predictions. We will also describe our automated tool, and discuss some of the lessons learned and issues that had to be dealt with.
Although our Standard Model for fault prediction has performed very well for predicting fault-prone files, we have investigated several additions to the standard set of predictor variables. We built models that included the following variations:
- calling structure of the software system
- counts of the number of developers involved with the software
- fine-grained counts of changes made to the software.
We will describe these variations and their impact on the prediction results, as well as an attempt to assess the influence of individual programmers on the fault-proneness of files.
This is joint work with Thomas Ostrand.
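As a purely illustrative stand-in for this workflow (the coefficients and file data below are invented, and the actual Standard Model fits a negative binomial regression to real code characteristics and history), one can score and rank files like this:

```python
import math

FILES = {
    # name: (kloc, changes_last_release, faults_last_release) -- invented data
    "parser.c": (12.0, 30, 7),
    "ui.c":     (8.0,  5,  0),
    "net.c":    (20.0, 22, 4),
    "util.c":   (3.0,  2,  0),
    "engine.c": (15.0, 40, 9),
}

def predicted_faults(kloc, changes, prior_faults):
    """Toy log-linear score in the spirit of a negative binomial
    regression; the coefficients are made up for illustration."""
    log_mu = 0.2 * math.log(1 + kloc) + 0.05 * changes + 0.3 * prior_faults
    return math.exp(log_mu)

def rank_files(files):
    """Rank files by predicted fault count, most fault-prone first."""
    return sorted(files, key=lambda f: predicted_faults(*files[f]), reverse=True)

ranking = rank_files(FILES)
print(ranking[0])   # → engine.c (large, heavily changed, fault-prone before)
```

Testers would then concentrate effort on the top of the ranking; the empirical finding reported in this line of work is that a small fraction of files predicted this way tends to contain a large majority of the faults.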
Speakers
Rance Cleaveland
Rance Cleaveland is Professor of Computer Science at the University of Maryland at College Park, where he has also served as Director of the Fraunhofer USA Center for Experimental and Software Engineering. Prior to joining the Maryland faculty in 2005, he held professorships at the State University of New York at Stony Brook and at North Carolina State University. He also co-founded Reactive Systems, Inc., in 1999 to commercialize tools for model-based testing of embedded software; Reactive Systems currently has numerous customers worldwide in the automotive and aerospace industries. In 1992 he received Young Investigator Awards from the National Science Foundation and from the Office of Naval Research, and in 1994 he was awarded the Alcoa Engineering Research Achievement prize at North Carolina State University. He has published over 125 papers in the areas of software verification and validation, formal methods, model checking, software specification formalisms, model-based testing, and verification tools. Cleaveland received B.S. degrees in Mathematics and Computer Science from Duke University in 1982 and his M.S. and Ph.D. degrees from Cornell University in 1985 and 1987 respectively.
Robert Feldt
Robert Feldt is a professor of software engineering at Blekinge Institute of Technology, Sweden and at Chalmers University of Technology, Sweden. He has also worked as an IT and software consultant for more than 20 years helping companies with strategic decisions and technical innovation. His research interests include human-centered software engineering, software testing and verification and validation, automated software engineering, requirements engineering and user experience. He is considered one of the early pioneers of search-based software engineering. Most of his research is empirical and conducted in close collaboration with industry partners in Sweden and globally. He received a Ph.D. (Techn. Dr.) in computer engineering from Chalmers University of Technology in 2002. He is a member of the Steering Committee for the International Conference on Software Testing, Verification and Validation (ICST).
Shuvendu Lahiri
Dr. Shuvendu Lahiri is a senior researcher in the Research in Software Engineering (RiSE) team at Microsoft Research, Redmond. He is interested in rigorous methods for improving the analysis of software. His current focus lies in using symbolic methods to perform differential static analysis and semantic methods to prioritize warnings from program verifiers. He has worked on logics and decision procedures, algorithms for dealing with heap-manipulating programs, abstraction techniques for hardware and software. Shuvendu holds a PhD from Carnegie Mellon University and a B.Tech. from IIT Kharagpur. His PhD dissertation on indexed predicate abstraction received the ACM SIGDA outstanding thesis award. More information can be found at http://research.microsoft.com/~shuvendu.
Karl Meinke
Karl Meinke has worked at the Universities of Leeds, Manchester, Swansea, ETH Zurich and TU Munich. He has been a Professor at KTH Royal Institute of Technology in the School of Computer Science and Communications since 2001. During his career he has focussed on different aspects of formal methods for software development, including learning-based testing since 2004. He was Founder Secretary of the European Association for Computer Science Logic (EACSL). He is a regular PC member for the testing conferences TAP (Tests and Proofs) and ICTSS (Int. Conf. on Testing Software and Systems). He is also a member of the ISTQB (Int. Software Testing Qualifications Board) working group on curriculum development for model-based testing.
Brian Nielsen
Brian Nielsen (M.Sc., Ph.D.) is an Associate Professor in Computer Science at Aalborg University, Denmark. His research interests include distributed, embedded, and cyber-physical systems and software, model-based development and test generation for real-time embedded systems, verification techniques and tools, and industrial applications. He is involved in numerous European projects related to model-based testing, most recently the industrially driven Artemis MBAT (Combined Model-based Analysis and Testing of Embedded Systems). He is a co-designer of the testing tool components of the Uppaal tool suite for verification and validation of real-time systems.
Ina Schaefer
Since April 2012, Ina Schaefer has been full professor and head of the Institute of Software Engineering and Automotive Informatics at Technische Universität Braunschweig, Germany. She received her PhD in 2008 at Technische Universität Kaiserslautern and was a postdoc at Chalmers University, Gothenburg, Sweden, in 2010-2011. Her research interests include the modular modeling and implementation of variant-rich software systems, as well as efficient quality assurance procedures for software variants and versions. She initiated the workshop series "Formal Methods for Software Product Line Engineering" and is PC co-chair of FASE'15. She is co-author of over 70 technical publications in relevant journals and conferences.
Elaine Weyuker
Elaine Weyuker is a Visiting Professor at Mälardalen University in Västerås, Sweden, and an independent researcher and consultant. Previously, she worked as a Fellow and Distinguished Member of the Technical Staff at AT&T Labs, a Professor of Computer Science at the Courant Institute of Mathematical Sciences of New York University, a Lecturer at the City University of New York, a Systems Engineer at IBM, and a programmer at Texaco. Her research focuses on empirical software engineering and techniques to build highly reliable and dependable software. Much of her recent research involved designing techniques, building statistical models and developing a tool to automatically predict which files of large industrial software systems are most likely to contain bugs, for which she holds a patent.
She is a member of the U.S. National Academy of Engineering, an IEEE Fellow, an ACM Fellow and an AT&T Fellow. She is the author of 170 refereed publications as well as several books and book chapters. Among her awards are the 2012 US President's volunteer service award, the 2010 ACM President's Award, the 2008 Anita Borg Institute Technical Leadership Award for Outstanding Research and Technical Leadership, the 2007 ACM SIGSOFT Outstanding Research Award, the 2004 IEEE Computer Society Harlan D. Mills Award, the Rutgers University 50th Anniversary Outstanding Alumni Award, the 2001 YWCA Woman of Achievement Award, and the 2004 AT&T Chairman's Diversity Award. She served as the chair of the ACM Council on Women in Computing (ACM-W) from 2004 to 2012 and has been a member of the Coalition to Diversify Computing's Executive Committee for many years.
Tentative Program
June 8 (9:00-17:00)
09:00-09:15 Arrival and Registration
09:15-09:30 Welcome, opening remarks (Mohammad Mousavi, Halmstad)
09:30-10:30 Model-based Testing of Embedded Real-time Systems under Uncertainty, Part I (Brian Nielsen, Aalborg)
10:30-11:00 Coffee Break
11:00-12:30 Model-based Testing of Embedded Real-time Systems under Uncertainty, Part II (Brian Nielsen, Aalborg)
12:30-14:00 Lunch Break
14:00-15:30 Differential and Multi-Version Program Verification, Part I (Shuvendu Lahiri, Microsoft)
15:30-16:00 Coffee Break
16:00-17:00 Differential and Multi-Version Program Verification, Part II (Shuvendu Lahiri, Microsoft)
June 9 (9:30-17:00)
09:30-10:30 Model-based Testing of Software Product Lines, Part I (Ina Schaefer, Braunschweig)
10:30-11:00 Coffee Break
11:00-12:30 Model-based Testing of Software Product Lines, Part II (Ina Schaefer, Braunschweig)
12:30-14:00 Lunch Break
14:00-15:30 Learning-based Testing of Procedural and Reactive Systems, Part I (Karl Meinke, KTH)
15:30-16:00 Coffee Break
16:00-17:00 Learning-based Testing of Procedural and Reactive Systems, Part II (Karl Meinke, KTH)
June 10 (9:30-12:30)
09:30-10:30 Approximate Formal Verification Using Model-Based Testing, Part I (Rance Cleaveland, Maryland)
10:30-11:00 Coffee Break
11:00-12:30 Approximate Formal Verification Using Model-Based Testing, Part II (Rance Cleaveland, Maryland)
12:30-14:00 Lunch Break
14:00-17:00 Social Event
18:30-22:00 Social Dinner
June 11 (9:30-17:00)
09:30-10:30 Taking Search-Based Software Testing to the Real-World, Part I (Robert Feldt, BTH and Chalmers)
10:30-11:00 Coffee Break
11:00-12:30 Taking Search-Based Software Testing to the Real-World, Part II (Robert Feldt, BTH and Chalmers)
12:30-14:00 Lunch Break
14:00-15:30 Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Part I (Elaine Weyuker, Mälardalen)
15:30-16:00 Coffee Break
16:00-17:00 Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Part II (Elaine Weyuker, Mälardalen)
Registration
The registration deadline is April 15, 2015.
To apply to the summer school, please send an email to Veronica.Gaspes@hh.se with "Halmstad Summer School on Testing" in the title.
If you have any dietary requirements, or would like to attend only certain days of the summer school, please specify in your email text.
After sending the registration email, you will receive a confirmation. Once your registration is confirmed, please proceed to payment via the PayPal page (to be provided in March 2015).
The registration fee is 2300 SEK (approx. 250 EUR) and covers lunches, coffee breaks, and the study material, but does not include the social event and dinner.
The ticket to the social event and dinner (excl. drinks) costs 500 SEK and can be requested upon registration (please indicate in your email).
Venue
The workshop will be held on the campus of Halmstad University in Halmstad, Sweden. Halmstad is a popular holiday destination located on the Swedish west coast. Just a few minutes by bicycle or bus takes you from campus to city centre, sandy beaches or forested Galgberget Hill.
Directions to/in Campus
Directions for getting to campus can be found at http://www.hh.se/english/abouttheuniversity/visitus.307_en.html
The campus map can be found here. The workshop will be located in the Baertling lecture hall, on the ground floor (1st floor in Swedish terms!) of the J building (house Visionen). Coffee breaks will be held on the ground floor of the same building.
Directions to Halmstad
Trains take you directly to Göteborg in 75 minutes, to the Malmö-Copenhagen area in about 2 hours and to Stockholm in 4.5 hours. There are also daily flights from Halmstad Airport to Stockholm.
If you are flying internationally, it is generally easiest to fly into Copenhagen (CPH) airport (also known as Kastrup). The best thing about flying into CPH is that you can simply buy a train ticket when you arrive and take a train from the airport directly to Halmstad. The train leaves from the airport itself approximately once an hour on weekdays. We recommend that you check the timetable at the Swedish Railways site and allow one hour from touchdown to getting to the train station (just outside customs).
(It seems that you can take an earlier or a later train on the same day regardless of the exact train you booked, but you will of course lose your seat reservation if you have made one.) To get to your hotel, you can combine a taxi booking with your train ticket at the Swedish Railways site; a driver will then wait for you with your name on a board when you arrive in Halmstad.
In Halmstad, everything is either in walking distance or a short taxi ride away. Usually there are taxis at the station. If there are none there is a phone that connects directly to the local taxi company. For the eventuality that the phone is not working, it is good to have a cell phone handy. The number for the taxi company is written on the phone.
Note that CPH is in Denmark (and not in Sweden). So, if you need visas for European countries, make sure you get one that works for both.
If for some reason you cannot or do not want to use CPH, the next best international airport is in Gothenburg (GOT), locally known as Landvetter. The tricky thing about using that airport is that you would first have to take a 45-minute shuttle from the airport to the Gothenburg train station, and then take the train to Halmstad. That is one transfer and one wait. You can buy a combined shuttle and train ticket from the Swedish Railways site.
Accommodation
We have made a pre-booking for a group of rooms at Hotel Continental. The negotiated rate is 795 SEK per night (including breakfast) for a single room and 1095 SEK for a double room. If you would like to use this offer, please send an email to the hotel reservations desk and mention the booking code "HSST2015".
Here are some other suggestions for accommodation, with an indication of their price range (obtained from booking.com) and their distance to the summer school venue:
- Scandic Hallandia (~160-200 EUR / night, 2km)
- Hotel Amadeus (~100-120 EUR / night, 2.5km)
- First Hotel Martenson (~130-150 EUR / night, 2km)
- Quality Hotel Halmstad (~80-100 EUR / night, 3 km)
- STF Halmstads Hostel Kaptenshamn (~80-100 EUR / night, 2km)
An annotated Google Map with suggestions for restaurants can be found here.
History
This is the fifth edition of the summer school. Information about the previous editions is provided below.
- HSST 2014 (June 9- June 12, 2014),
- HSST 2013 (June 3- June 5, 2013),
- Second Summer School on Accurate Programming (May 30 - June 1, 2012)
Organizers
Please do not hesitate to contact us if you have any questions or enquiries:
- Veronica Gaspes (Organization Chair, veronica.gaspes@hh.se)
- Mohammad Mousavi (Program Co-Chair, m.r.mousavi@hh.se)
- Eva Nestius (Local Organization)
- Walid Taha (Program Co-Chair, walid.taha@hh.se)