
5th Halmstad Summer School on Testing, HSST 2015

June 8-11, 2015 - Halmstad University, Sweden


Introduction

Testing and debugging account for a major part of software development cost and effort, yet the current practice of software testing is often insufficiently structured and disciplined. There have been various attempts in the past decades to bring more rigour and structure into this field, resulting in several industrial-strength processes, techniques and tools for different levels of testing. The 5th Halmstad Summer School on Testing provides an overview of the state of the art in testing, including theory, industrial cases, tools and hands-on tutorials by internationally renowned researchers.

Tutorials

Taking Search-Based Software Testing to the Real World, Robert Feldt

Since the late 1990s, much research (and hype) has gone into search-based software engineering (SBSE) and, in particular, search-based software testing (SBST). The basic idea is to use (cheap) CPUs rather than (expensive) humans: we formulate software engineering problems as optimization problems and apply search to find solutions. Human engineers guide and direct the search and judge when we have reached something "good enough", but act less as the main driving designers. As an example from testing, we clearly want tests that find faults and cover all "parts" of the software, but it can be very hard to create a test suite that accomplishes this. If we formulate the problem as "find the test suite of 10 test cases that gives the maximum coverage", and we have a way to measure the sought-for coverage, we can then search for the best test suite.
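As a toy illustration of this formulation (a hedged sketch in Python, not any of the tools presented in the tutorial), the fragment below hill-climbs over fixed-size test suites to maximize a coverage measure; run_and_measure_coverage is a hypothetical stand-in for instrumented execution of the system under test.

    import random

    # Hypothetical stand-in for instrumented execution: which of eight
    # "parts" (e.g. branches) of the software does this input exercise?
    def run_and_measure_coverage(test_input):
        return {test_input % 8}

    def suite_coverage(suite):
        covered = set()
        for t in suite:
            covered |= run_and_measure_coverage(t)
        return len(covered)

    def mutate(suite):
        # Replace one randomly chosen test case with a fresh random input
        new_suite = list(suite)
        new_suite[random.randrange(len(new_suite))] = random.randrange(10_000)
        return new_suite

    def search_test_suite(size=10, budget=1_000):
        # Hill-climbing search for a suite of `size` tests with maximum coverage
        best = [random.randrange(10_000) for _ in range(size)]
        best_cov = suite_coverage(best)
        for _ in range(budget):
            candidate = mutate(best)
            cov = suite_coverage(candidate)
            if cov >= best_cov:  # accept ties to keep exploring plateaus
                best, best_cov = candidate, cov
        return best, best_cov

    print(search_test_suite())

Real SBST tools replace the random mutation with richer search operators (e.g. genetic algorithms) and measure coverage on the actual instrumented program.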

Even though the area of SBST has made excellent progress in the last 15 years, there is a lack of practically useful tools. A major problem is that research has focused on generating test cases with numerical inputs. In this tutorial I will present our work on building SBST tools that are more useful in the real world and can generate complex inputs, and I will show examples and let you try out the tools in practice. I will also show how an integration of many existing technologies is needed to make these tools practically useful, in particular ways to present the search progress and results, as well as ways to allow the user to interact with and direct the search. I will also touch on how these technologies can be used for testing non-functional properties such as performance and robustness.

Differential and Multi-Version Program Verification, Shuvendu Lahiri

Program verification has made a lot of progress over the last few decades. However, fundamental problems remain that undermine the widespread adoption of these powerful techniques, including (a) the need for users to write well-formed specifications, (b) the need to infer or write complex program-specific invariants, and (c) the need to model underspecified environments. The absence of these leads to numerous false (often dumb) warnings that hinder initial adoption by developers. Differential or multi-version program verification has the potential to tolerate many of these shortcomings of (single-version) program verification, while at the same time offering new assurances. There are several opportunities for differential analysis, including (a) performing incremental analysis, (b) using one program as a specification to provide (weaker) relative correctness, (c) checking a new class of differential properties (beyond equivalence), and (d) exploiting structural similarity to use more scalable abstractions and invariant inference. The talk will summarize the current state of research in multi-version program analysis and verification. We will introduce formalisms for multi-version verification, including mutual summaries as a mechanism for stating relative specifications. We will describe applications of differential reasoning to equivalence checking of compiler versions, verifying the correctness of bug fixes, and ranking verification warnings.

An outline of some of the papers can be found on the SymDiff webpage: http://research.microsoft.com/en-us/projects/symdiff/default.aspx
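For a rough flavour of the problem setting, here is a lightweight random-testing sketch in Python (deliberately simpler than the symbolic techniques of the tutorial, and not SymDiff itself): two hypothetical versions of a rounding routine are checked against each other, and any disagreement is a counterexample to their equivalence.

    import math
    import random

    # Two hypothetical versions of the same routine; the "fix" silently
    # changed half-way cases from round-half-up to banker's rounding.
    def round_v1(x):
        return math.floor(x + 0.5)   # rounds 2.5 -> 3

    def round_v2(x):
        return round(x)              # banker's rounding: 2.5 -> 2

    def differential_test(f, g, trials=10_000):
        # Random differential testing: returns an input on which the two
        # versions disagree, or None if no disagreement was found (which,
        # unlike verification, proves nothing).
        for _ in range(trials):
            x = random.randint(-1_000, 1_000) / 2  # make half-way cases likely
            if f(x) != g(x):
                return x             # counterexample to equivalence
        return None

    cex = differential_test(round_v1, round_v2)
    print("no difference found" if cex is None else f"versions differ on x = {cex}")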

Learning-based Testing of Procedural and Reactive Systems, Karl Meinke

Learning-based testing (LBT) is a new paradigm for black-box requirements testing that is based on combining machine learning with model checking. The basic idea is to incrementally reverse-engineer an abstract model of a system under test (SUT) by applying machine learning techniques to black-box test cases and their results. Each abstract model M_i is then model checked against a user requirement R, and any counterexamples to R are submitted as new test cases to the SUT, to establish whether they are true or false negatives. True negatives are requirements failures in the SUT, while false negatives refine the model to M_{i+1} on the next learning cycle. In this way, a sequence of models M_0, M_1, … is generated that gradually converges to the SUT, where M_0 is the so-called "null hypothesis".

Test verdict generation (pass/fail/warning) is fully automatic, based on a simple equality test. So a high degree of test automation is achieved. In practice many thousands of test cases per hour can be executed, with greater effectiveness than random testing.
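The loop described above is compact enough to sketch in full. The following runnable Python toy uses deliberately simplistic stand-ins (a memoizing "learner", an exhaustive "model checker" over a finite input domain, and a hypothetical SUT with a planted bug); a real LBT tool would plug in automata learning and a temporal-logic model checker.

    DOMAIN = range(-50, 50)

    def sut(x):
        # Hypothetical SUT: should compute x*x, but has a planted bug at x == 7
        return -1 if x == 7 else x * x

    def requirement(x, y):
        # R: the output for input x must equal x squared
        return y == x * x

    def learn(observations):
        # Toy "learning": memoize observed behaviour, predict 0 elsewhere;
        # the all-zero default plays the role of the null hypothesis M_0
        return lambda x: observations.get(x, 0)

    def model_check(model):
        # Toy "model checking": exhaustively search the model for a violation of R
        for x in DOMAIN:
            if not requirement(x, model(x)):
                return x                     # counterexample: model violates R at x
        return None

    def lbt_loop(initial_tests, max_iters=200):
        observations = {t: sut(t) for t in initial_tests}
        for i in range(max_iters):
            model = learn(observations)      # hypothesis model M_i
            cex = model_check(model)
            if cex is None:
                return ("pass", None, i)     # model satisfies R
            actual = sut(cex)                # submit the counterexample as a test
            if not requirement(cex, actual): # simple equality-based verdict
                return ("fail", cex, i)      # true negative: a real SUT failure
            observations[cex] = actual       # false negative: refine to M_{i+1}
        return ("inconclusive", None, max_iters)

    print(lbt_loop(initial_tests=[0]))       # finds the planted bug: ('fail', 7, ...)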

LBT is a general paradigm that can be applied to any class of software systems for which there exist efficient machine learning and model checking algorithms. We can illustrate this generality with early research on testing procedural "C"-style programs against Hoare-style pre- and postconditions. Of considerable recent interest has been the application of LBT to testing reactive systems, based on the extensive knowledge we have about automata learning algorithms and temporal logic model checkers.

Model-based Testing of Embedded Real-time Systems under Uncertainty, Brian Nielsen

Model-based analysis and testing is a promising technique for improving the quality of testing by automatically generating an efficient set of provably valid test cases from a system model. Testing the real-time properties of embedded systems is challenging because it must deal with timing, concurrency, the processing and computation of complex mixed discrete and continuous signals, and limited observation and control. Several model-based techniques and tools have been proposed, but few deal systematically with models capturing the indeterminacy resulting from concurrency, timing, and limited observability and controllability.

This lecture proposes a number of model-based test generation principles and techniques that aim at efficient testing of timed systems under uncertainty. In particular, the lecture will outline an input/output conformance testing theory for real-time systems; introduce the Uppaal timed automata formalism and tool suite for timed modelling, verification, and test generation; introduce algorithms and tools for online and offline test generation; and propose timed games as a way to model uncertainty and non-determinism.
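As a hedged, much-simplified illustration of online timed testing (a Python sketch, not Uppaal's algorithms), the fragment below drives a hypothetical SUT adapter with stimuli from a model that pairs each expected output with a response deadline; the verdict is fail on either a wrong response or a late one.

    import time

    # Hypothetical timed model: stimulus -> (expected output, deadline in seconds)
    MODEL = {
        "ping":  ("pong", 0.050),
        "query": ("data", 0.200),
    }

    def sut_step(stimulus):
        # Hypothetical SUT adapter: send a stimulus, return the response
        time.sleep(0.01)
        return {"ping": "pong", "query": "data"}[stimulus]

    def online_timed_test(trace):
        for stimulus in trace:
            expected, deadline = MODEL[stimulus]
            start = time.monotonic()
            observed = sut_step(stimulus)
            elapsed = time.monotonic() - start
            if observed != expected:
                return ("fail", f"{stimulus}: got {observed!r}, expected {expected!r}")
            if elapsed > deadline:
                return ("fail", f"{stimulus}: answered after {elapsed:.3f}s > {deadline}s")
        return ("pass", None)

    print(online_timed_test(["ping", "query", "ping"]))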


Model-based Testing of Software Product Lines, Ina Schaefer

Software increasingly pervades our everyday lives; hence, we place high requirements on its trustworthiness. Software testing greatly contributes to the quality assurance of modern software systems. However, as today's software systems get more and more complex and exist in many different variants, we need rigorous and systematic approaches to software testing. In this tutorial, we first present model-based testing as an approach for systematic test case generation, test execution and test result evaluation for single-system testing. The central idea of model-based testing is to base all testing activities on an executable model-based test specification. Second, we consider model-based testing for variant-rich software systems and review two model-based software product line testing techniques: sample-based testing generates a set of representative variants for testing, and variability-aware product line testing uses a family-based test model which contains the model-based specification of all considered product variants.
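As a toy illustration of the sample-based approach (a hedged Python sketch with a hypothetical feature model and no cross-feature constraints), the fragment below greedily selects product variants until every pairwise combination of feature settings is covered by some selected variant.

    from itertools import combinations, product

    FEATURES = ["bluetooth", "gps", "camera", "nfc"]   # hypothetical feature model

    def all_pairs():
        # Every pair of (feature, value) settings that must be covered
        return {((f1, v1), (f2, v2))
                for f1, f2 in combinations(FEATURES, 2)
                for v1, v2 in product([False, True], repeat=2)}

    def pairs_of(variant):
        return {((f1, variant[f1]), (f2, variant[f2]))
                for f1, f2 in combinations(FEATURES, 2)}

    def pairwise_sample():
        # Greedy covering: repeatedly add the variant covering most new pairs
        uncovered = all_pairs()
        candidates = [dict(zip(FEATURES, vals))
                      for vals in product([False, True], repeat=len(FEATURES))]
        sample = []
        while uncovered:
            best = max(candidates, key=lambda v: len(pairs_of(v) & uncovered))
            sample.append(best)
            uncovered -= pairs_of(best)
        return sample

    for variant in pairwise_sample():
        print(variant)   # typically far fewer variants than all 16 products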

Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Elaine Weyuker

It would obviously be very valuable to know in advance which files in the next release of a large software system are most likely to contain the largest numbers of defects. To accomplish this, we developed a negative binomial regression model and used it to predict the expected number of bugs in each file of the next release of a software system. The predictions are based on code characteristics and bug and modification history data. We will discuss what we have learned from applying the model to make predictions for 170 releases of nine large industrial systems, each with multiple years of field exposure, and tell you about our success in making these predictions. We will also describe our automated tool, and discuss some of the lessons learned and issues that had to be dealt with.
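As a hedged sketch of this kind of model (fitted here on synthetic stand-in data, not the industrial code characteristics and bug history used in the actual studies), the Python fragment below fits a negative binomial regression with statsmodels and ranks files by their predicted expected fault count.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_files = 500

    # Hypothetical per-file predictors: log size, recent changes, prior bugs
    log_kloc = rng.normal(1.0, 0.5, n_files)
    changes = rng.poisson(3, n_files)
    prior_bugs = rng.poisson(1, n_files)

    # Synthetic fault counts with overdispersion (a gamma-Poisson mixture)
    mu = np.exp(-1.0 + 0.8 * log_kloc + 0.2 * changes + 0.4 * prior_bugs)
    faults = rng.poisson(mu * rng.gamma(2.0, 0.5, n_files))

    X = sm.add_constant(np.column_stack([log_kloc, changes, prior_bugs]))
    result = sm.GLM(faults, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(result.summary())

    # Rank files by predicted expected fault count, most fault-prone first
    ranking = np.argsort(-result.predict(X))
    print("top-10 predicted fault-prone files:", ranking[:10])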

Although our Standard Model for fault prediction has performed very well for predicting fault-prone files, we have investigated several additions to the standard set of predictor variables. We built models that included the following variations:

  • calling structure of the software system
  • counts of the number of developers involved with the software
  • fine-grained counts of changes made to the software.

We will describe these variations and their impact on the prediction results, as well as an attempt to assess the influence of individual programmers on the fault-proneness of files.

This is joint work with Thomas Ostrand.

Speakers

Rance Cleaveland

Robert Feldt

Robert Feldt is a professor of software engineering at Blekinge Institute of Technology, Sweden and at Chalmers University of Technology, Sweden. He has also worked as an IT and software consultant for more than 20 years helping companies with strategic decisions and technical innovation. His research interests include human-centered software engineering, software testing and verification and validation, automated software engineering, requirements engineering and user experience. He is considered one of the early pioneers of search-based software engineering. Most of his research is empirical and conducted in close collaboration with industry partners in Sweden and globally. He received a Ph.D. (Techn. Dr.) in computer engineering from Chalmers University of Technology in 2002. He is a member of the Steering Committee for the International Conference on Software Testing, Verification and Validation (ICST).

Shuvendu Lahiri

Dr. Shuvendu Lahiri is a senior researcher in the Research in Software Engineering (RiSE) team at Microsoft Research, Redmond. He is interested in rigorous methods for improving the analysis of software. His current focus lies in using symbolic methods to perform differential static analysis and semantic methods to prioritize warnings from program verifiers. He has worked on logics and decision procedures, algorithms for dealing with heap-manipulating programs, and abstraction techniques for hardware and software. Shuvendu holds a PhD from Carnegie Mellon University and a B.Tech. from IIT Kharagpur. His PhD dissertation on indexed predicate abstraction received the ACM SIGDA outstanding thesis award. More information can be found at http://research.microsoft.com/~shuvendu.

Karl Meinke

Karl Meinke has worked at the Universities of Leeds, Manchester, Swansea, ETH Zurich and TU Munich. He has been a Professor at KTH Royal Institute of Technology in the School of Computer Science and Communications since 2001. During his career he has focussed on different aspects of formal methods for software development, including learning-based testing since 2004. He was Founder Secretary of the European Association for Computer Science Logic (EACSL). He is a regular PC member for the testing conferences TAP (Tests and Proofs) and ICTSS (Int. Conf. on Testing Software and Systems). He is also a member of the ISTQB (Int. Software Testing Qualifications Board) working group on curriculum development for model-based testing.

Brian Nielsen

Brian Nielsen (M.Sc., Ph.D.) is an Associate Professor in Computer Science at Aalborg University, Denmark. His research interests include distributed, embedded, and cyber-physical systems and software, model-based development and test generation for real-time embedded systems, verification techniques and tools, and industrial applications. He is involved in numerous European projects related to model-based testing, most recently the industrially driven Artemis MBAT (Combined Model-based Analysis and Testing of Embedded Systems). He is a co-designer of the testing tool components of the Uppaal tool suite for verification and validation of real-time systems.


Ina Schaefer

Since April 2012, Ina Schaefer has been a full professor and head of the Institute of Software Engineering and Automotive Informatics at Technische Universität Braunschweig, Germany. She received her PhD in 2008 at Technische Universität Kaiserslautern and was a postdoc at Chalmers University of Technology, Gothenburg, Sweden, from 2010 to 2011. Her research interests include the modular modeling and implementation of variant-rich software systems, as well as efficient quality assurance procedures for software variants and versions. She initiated the workshop series "Formal Methods for Software Product Line Engineering" and is PC co-chair of FASE'15. She is co-author of over 70 technical publications in relevant journals and conferences.

Elaine Weyuker

Elaine Weyuker is a Visiting Professor at Mälardalen University in Västerås, Sweden and an independent researcher and consultant. Previously, she worked as a Fellow and Distinguished Member of the Technical Staff at AT&T Labs, a Professor of Computer Science at the Courant Institute of Mathematical Sciences of New York University, a Lecturer at the City University of New York, a Systems Engineer at IBM, and a programmer at Texaco. Her research focuses on empirical software engineering and techniques to build highly reliable and dependable software. Much of her recent research has involved designing techniques, building statistical models and developing a tool to automatically predict which files of large industrial software systems are most likely to contain bugs, for which she holds a patent.

She is a member of the U.S. National Academy of Engineering, an IEEE Fellow, an ACM Fellow and an AT&T Fellow. She is the author of 170 refereed publications as well as several books and book chapters. Among her awards are the 2012 US President's Volunteer Service Award, the 2010 ACM President's Award, the 2008 Anita Borg Institute Technical Leadership Award for Outstanding Research and Technical Leadership, the 2007 ACM SIGSOFT Outstanding Research Award, the 2004 IEEE Computer Society Harlan D. Mills Award, the Rutgers University 50th Anniversary Outstanding Alumni Award, the 2001 YWCA Woman of Achievement Award, and the 2004 AT&T Chairman's Diversity Award. She served as the chair of the ACM Council on Women in Computing (ACM-W) from 2004 to 2012 and has been a member of the Coalition to Diversify Computing's Executive Committee for many years.


Registration

The registration deadline is April 15, 2015.


To apply to the summer school, please send an email to Veronica.Gaspes@hh.se with "Halmstad Summer School on Testing" in the title. If you have any dietary requirements, or would like to attend only certain days of the summer school, please specify this in the text of your email.

After sending the registration email, you will receive a confirmation. Please proceed to payment after you receive the confirmation email. If your registration is confirmed, please use the PayPal page (to be provided in March 2015).


The registration fee is 2000 SEK (approx. 225 EUR) and covers lunches, coffee breaks, the social event, and the study material, but does not include the social dinner. A ticket to the social dinner costs 500 SEK and can be requested upon registration (please indicate this in your email).


Accommodation

Here are some suggestions for accommodation, with an indication of their price range (obtained from booking.com) and their distance to the summer school venue:

  • Scandic Hallandia (~160-200 EUR / night, 2km)
  • Hotel Amadeus (~100-120 EUR / night, 2.5km)
  • First Hotel Martenson (~130-150 EUR / night, 2km)
  • Quality Hotel Halmstad (~80-100 EUR / night, 3 km)
  • STF Halmstads Hostel Kaptenshamn (~80-100 EUR / night, 2km)

An annotated Google Map with suggestions for restaurants can be found at https://mapsengine.google.com/map/edit?mid=zg466yRKr6MA.khE1r5rg8V64.

History

This is the fifth edition of the summer school. Information about the previous editions is provided below.

  • HSST 2014 (June 9 - June 12, 2014)
  • HSST 2013 (June 3 - June 5, 2013)
  • Second Summer School on Accurate Programming (May 30 - June 1, 2012): http://blog.accurate-programming.org/2012/05/second-summer-school-on-accurate.html


Organizers

Please do not hesitate to contact us if you have any questions or enquiries:

  • Veronica Gaspes (Organization Chair, veronica.gaspes@hh.se)
  • Mohammad Mousavi (Program Co-Chair, m.r.mousavi@hh.se)
  • Eva Nestius (Local Organization)
  • Walid Taha (Program Co-Chair, walid.taha@hh.se)


HSST 2014 Second Group Photo.