5th Halmstad Summer School on Testing, HSST 2015

June 8-11, 2015 - Halmstad University, Sweden


Introduction

Testing and debugging account for a major part of software development cost and effort, yet the current practice of software testing is often insufficiently structured and disciplined. There have been various attempts in the past decades to bring more rigour and structure into this field, resulting in several industrial-strength processes, techniques and tools for different levels of testing. The 5th Halmstad Summer School on Testing provides an overview of the state of the art in testing, including theory, industrial cases, tools and hands-on tutorials by internationally-renowned researchers.

Tutorials

Taking Search-Based Software Testing to the Real-World, Robert Feldt

Since the late 1990s, a lot of research (and hype) has gone into the area of search-based software engineering (SBSE) and, in particular, search-based software testing (SBST). The basic idea is to use (cheap) CPUs rather than (expensive) humans: we formulate our software engineering problems as optimization problems and apply search to find solutions. The human engineers then guide and direct the search, and judge when it has reached something "good enough", but act less as the main driving designers. As an example from testing, it is clear that we want tests that find faults and cover all "parts" of the software, but it can be very hard to create a test suite that accomplishes this. If we instead formulate the problem as "find the test suite of 10 test cases that gives the maximum coverage", and we have a way to measure that coverage, we can search for the best test suite.
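To make this formulation concrete, here is a minimal Python sketch of such a search (not one of the tutorial's actual tools): the function under test, the branch-based coverage measure, and the simple mutation search are all invented stand-ins for illustration.

  import random

  def under_test(x):
      # Toy system under test with four branches to cover.
      if x < 0:
          return "negative"
      if x == 0:
          return "zero"
      if x % 2 == 0:
          return "even"
      return "odd"

  def coverage(suite):
      # Fitness: how many distinct branches the suite exercises.
      return len({under_test(x) for x in suite})

  def random_suite(size=10):
      return [random.randint(-100, 100) for _ in range(size)]

  # Simple (1+1)-style search: mutate one input, keep non-worsening suites.
  suite = random_suite()
  for _ in range(1000):
      candidate = list(suite)
      candidate[random.randrange(len(candidate))] = random.randint(-100, 100)
      if coverage(candidate) >= coverage(suite):
          suite = candidate

  print("coverage:", coverage(suite), "suite:", suite)

Real SBST tools replace the toy fitness with structural coverage measured by instrumenting the system under test, and the naive mutation loop with stronger metaheuristics such as genetic algorithms.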

Even though the area of SBST has made excellent progress in the last 15 years, there is a lack of practically useful tools. A major problem is that research has focused on generating test cases with numerical inputs. In this tutorial I will present our work on building SBST tools that are more useful in the real world because they can generate complex inputs; I will show examples and let you try out the tools in practice. I will also show how many existing technologies must be integrated to make these tools practically useful, in particular ways to present the search progress and results, and to let the user interact with and direct the search. Finally, I will touch on how these technologies can be used for testing non-functional properties such as performance and robustness.

Differential and Multi-Version Program Verification, Shuvendu Lahiri

Program verification has made a lot of progress over the last few decades. However, fundamental problems remain that undermine the widespread adoption of these powerful techniques, including the need for users to (a) write well-formed specifications, (b) infer or write complex program-specific invariants, and (c) model underspecified environments. The absence of these leads to numerous false (often dumb) warnings that hinder initial adoption in the hands of developers.

Differential or multi-version program verification has the potential to tolerate many of these shortcomings of (single-version) program verification while at the same time offering new assurance. There are several opportunities for differential analysis, including (a) performing incremental analysis, (b) using one program as a specification to provide (weaker) relative correctness, (c) checking a new class of differential properties (beyond equivalence), and (d) exploiting structural similarity to use more scalable abstractions and invariant inference.

The talk will summarize the current state of research in multi-version program analysis and verification. We will introduce the formalisms for multi-version verification, including mutual summaries as a mechanism to state relative specifications. We will describe applications of differential reasoning to equivalence checking of compiler versions, verifying the correctness of bug fixes, and ranking verification warnings.

An outline of some of the papers can be found on the SymDiff webpage: http://research.microsoft.com/en-us/projects/symdiff/default.aspx
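As a rough intuition for the "one program as a spec" idea, the sketch below compares two hypothetical versions of a function by exhaustively checking a small input domain. Note that this is differential testing rather than verification: tools such as SymDiff establish equivalence or relative correctness symbolically, for all inputs.

  def abs_v1(x):
      # Old version: serves as the (relative) specification.
      return -x if x < 0 else x

  def abs_v2(x):
      # New version after a rewrite; it should agree with v1 everywhere.
      return abs(x)

  # Check the two versions against each other on a bounded input domain.
  diverging = [x for x in range(-1000, 1001) if abs_v1(x) != abs_v2(x)]
  if diverging:
      print("versions diverge on:", diverging[:5])
  else:
      print("versions agree on the sampled domain")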

Model-based Testing of Software Product Lines, Ina Schaefer

Software increasingly pervades our everyday lives, so we place high requirements on its trustworthiness. Software testing greatly contributes to the quality assurance of modern software systems. However, as today's software systems get more and more complex and exist in many different variants, we need rigorous and systematic approaches to software testing. In this tutorial, we first present model-based testing as an approach for systematic test case generation, test execution and test result evaluation for single-system testing. The central idea of model-based testing is to base all testing activities on an executable model-based test specification. Second, we consider model-based testing for variant-rich software systems and review two model-based software product line testing techniques. Sample-based testing generates a set of representative variants for testing, and variability-aware product line testing uses a family-based test model which contains the model-based specification of all considered product variants.
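As a minimal illustration of basing test generation on an executable model, the sketch below encodes a hypothetical door-lock controller as a finite state machine and derives one abstract test case per transition; the model, states and events are invented for illustration and are not from the tutorial.

  from collections import deque

  # Hypothetical test model: (state, event) -> next state.
  MODEL = {
      ("locked", "unlock"): "unlocked",
      ("unlocked", "lock"): "locked",
      ("unlocked", "open"): "open",
      ("open", "close"): "unlocked",
  }

  def path_to(target, start="locked"):
      # Shortest event sequence from the start state to a target state (BFS).
      queue, seen = deque([(start, [])]), {start}
      while queue:
          state, path = queue.popleft()
          if state == target:
              return path
          for (src, event), dst in MODEL.items():
              if src == state and dst not in seen:
                  seen.add(dst)
                  queue.append((dst, path + [event]))
      raise ValueError("unreachable state: " + target)

  # One abstract test per transition: reach the source state, then fire
  # the transition's event (transition coverage of the model).
  tests = [path_to(src) + [event] for (src, event) in MODEL]
  for test in tests:
      print(test)

In a full model-based testing process, each abstract event sequence would be concretized into an executable test script, with the model also supplying the expected outcomes as the test oracle.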

Automated Fault Prediction: The Ins, The Outs, The Ups, The Downs, Elaine Weyuker

It would obviously be very valuable to know in advance which files in the next release of a large software system are most likely to contain the largest numbers of defects. To accomplish this, we developed a negative binomial regression model and used it to predict the expected number of bugs in each file of the next release of a software system. The predictions are based on code characteristics and bug and modification history data. We will discuss what we have learned from applying the model to make predictions for 170 releases of nine large industrial systems, each with multiple years of field exposure, and tell you about our success in making these predictions. We will also describe our automated tool, and discuss some of the lessons learned and issues that had to be dealt with.
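As a rough sketch of the modeling step described above (not the authors' actual model, predictors or data), the following fits a negative binomial regression on synthetic per-file features with statsmodels and ranks files by their predicted bug counts.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n = 200  # synthetic "files"

  # Invented per-file predictors: size, prior bugs, recent changes.
  loc = rng.integers(100, 5000, n)
  prior_bugs = rng.poisson(2, n)
  changes = rng.poisson(8, n)

  # Synthetic bug counts, loosely tied to the predictors.
  rate = np.exp(-2 + 0.0004 * loc + 0.15 * prior_bugs + 0.03 * changes)
  bugs = rng.poisson(rate)

  # Negative binomial GLM: a count model that, unlike Poisson, tolerates
  # the over-dispersion typical of bug-count data.
  X = sm.add_constant(np.column_stack([loc, prior_bugs, changes]))
  result = sm.GLM(bugs, X, family=sm.families.NegativeBinomial()).fit()

  # Rank files by predicted expected bug count for the "next release".
  predicted = result.predict(X)
  print("ten riskiest files (by index):", np.argsort(predicted)[::-1][:10])

In the actual studies, the predictors come from the systems' version-control and bug-history databases, and the model is judged by how many of the faults that later surface fall in its top-ranked files.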

Although our Standard Model for fault prediction has performed very well for predicting fault-prone files, we have investigated several additions to the standard set of predictor variables. We built models that included the following variations:

  • calling structure of the software system
  • counts of the number of developers involved with the software
  • fine-grained counts of changes made to the software.

We will describe these variations and their impact on the prediction results, as well as an attempt to assess the influence of individual programmers on the fault-proneness of files.

This is joint work with Thomas Ostrand.

Speakers

Rance Cleaveland

Robert Feldt

Robert Feldt is a professor of software engineering at Blekinge Institute of Technology, Sweden and at Chalmers University of Technology, Sweden. He has also worked as an IT and software consultant for more than 20 years helping companies with strategic decisions and technical innovation. His research interests include human-centered software engineering, software testing and verification and validation, automated software engineering, requirements engineering and user experience. He is considered one of the early pioneers of search-based software engineering. Most of his research is empirical and conducted in close collaboration with industry partners in Sweden and globally. He received a Ph.D. (Techn. Dr.) in computer engineering from Chalmers University of Technology in 2002. He is a member of the Steering Committee for the International Conference on Software Testing, Verification and Validation (ICST).

Shuvendu Lahiri

Dr. Shuvendu Lahiri is a senior researcher in the Research in Software Engineering (RiSE) team at Microsoft Research, Redmond. He is interested in rigorous methods for improving the analysis of software. His current focus lies in using symbolic methods to perform differential static analysis and semantic methods to prioritize warnings from program verifiers. He has worked on logics and decision procedures, algorithms for dealing with heap-manipulating programs, and abstraction techniques for hardware and software. Shuvendu holds a PhD from Carnegie Mellon University and a B.Tech. from IIT Kharagpur. His PhD dissertation on indexed predicate abstraction received the ACM SIGDA outstanding thesis award. More information can be found at http://research.microsoft.com/~shuvendu.

Karl Meinke

Brian Nielsen

Ina Schaefer

Since April 2012, Ina Schaefer has been a full professor and head of the Institute of Software Engineering and Automotive Informatics at Technische Universität Braunschweig, Germany. She received her PhD in 2008 at Technische Universität Kaiserslautern and was a postdoc at Chalmers University of Technology, Gothenburg, Sweden, from 2010 to 2011. Her research interests include the modular modeling and implementation of variant-rich software systems, as well as efficient quality assurance procedures for software variants and versions. She initiated the workshop series "Formal Methods for Software Product Line Engineering" and is PC co-chair of FASE'15. She is co-author of over 70 technical publications in relevant journals and conferences.

Elaine Weyuker

Elaine Weyuker is a Visiting Professor at Mälardalen University in Västerås, Sweden, and an independent researcher and consultant. Previously, she worked as a Fellow and Distinguished Member of the Technical Staff at AT&T Labs, a Professor of Computer Science at the Courant Institute of Mathematical Sciences of New York University, a Lecturer at the City University of New York, a Systems Engineer at IBM, and a programmer at Texaco. Her research focuses on empirical software engineering and techniques to build highly reliable and dependable software. Much of her recent research involved designing techniques, building statistical models and developing a tool to automatically predict which files of large industrial software systems are most likely to contain bugs, for which she holds a patent.

She is a member of the U.S. National Academy of Engineering, an IEEE Fellow, an ACM Fellow and an AT&T Fellow. She is the author of 170 refereed publications as well as several books and book chapters. Among her awards are the 2012 US President's Volunteer Service Award, the 2010 ACM President's Award, the 2008 Anita Borg Institute Technical Leadership Award for Outstanding Research and Technical Leadership, the 2007 ACM SIGSOFT Outstanding Research Award, the 2004 IEEE Computer Society Harlan D. Mills Award, the Rutgers University 50th Anniversary Outstanding Alumni Award, the 2001 YWCA Woman of Achievement Award, and the 2004 AT&T Chairman's Diversity Award. She served as the chair of the ACM Council on Women in Computing (ACM-W) from 2004 to 2012 and has been a member of the Coalition to Diversify Computing's Executive Committee for many years.


Registration

The registration deadline is April 15, 2015.


To apply to the summer school, please send an email to Veronica.Gaspes@hh.se with "Halmstad Summer School on Testing" in the subject line. If you have any dietary requirements or would like to attend only certain days of the summer school, please specify this in your email.


The registration fee is 2000 SEK (approx. 225 EUR) and covers lunches, coffee breaks, the social event, and the study material, but does not include the social dinner. A ticket to the social dinner costs 500 SEK and can be requested upon registration (please indicate this in your email). After sending the registration email, you will receive a confirmation. Please proceed to payment only after you receive the confirmation email.

If your registration is confirmed, please use the PayPal page (to be provided in March 2015).