WG211/M4Talks
Contents
- 1 Synthesis of Hybrid Automata from Temporal Logic Specifications with Applications in Computer Animation
- 2 Modular Checking of Generative Programs with C++ "Concepts"
- 3 The Converge programming language
- 4 Compile-Time Reflection: Program Augmentation with Patterns
- 5 Can We Teach Computers to Write Fast Libraries?
Synthesis of Hybrid Automata from Temporal Logic Specifications with Applications in Computer Animation
Tom Ellman
Department of Computer Science
Vassar College
A hybrid automaton is a formal model of a physical system that exhibits both discrete and continuous behaviors. The behavior of a hybrid automaton can be specified in a timed hybrid linear temporal logic (THLTL), which includes the usual LTL modal operators along with facilities for expressing precise timing constraints. Hybrid automata can be simulated using numerical methods for solving differential equations, together with code for controlling transitions among behavioral modes. Hybrid automata have a wide variety of applications in simulation and control, including one of my own interests: physics-based computer animation.
My research is developing a schema-based approach to the specification and synthesis of hybrid automata: a user develops a specification by selecting and instantiating one or more pre-defined THLTL schemata. Each schema is associated with a collection of rewrite rules implemented in Mathematica. The rewrite rules generate C++ code that runs in a numerical simulator architecture. I will illustrate the approach by showing how it synthesizes some non-trivial physics-based animation programs. I will also sketch work in progress applying the approach to problems of goal-directed character animation.
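To make the synthesis target concrete, the following is a minimal hand-written C++ sketch of the kind of simulation code a hybrid automaton gives rise to: one continuous mode (free fall, integrated numerically with an explicit Euler step) and one discrete transition (a bounce) triggered by a guard condition. The model, the names, and the integrator are illustrative assumptions of mine, not the code produced by Ellman's rewrite rules.

```cpp
#include <cstdio>

// Hand-written sketch of a hybrid-automaton simulation loop (illustrative only):
// a bouncing ball with one continuous mode (free fall) and one discrete
// transition (the bounce), integrated with an explicit Euler scheme.

struct State {
    double y;  // height (m)
    double v;  // vertical velocity (m/s)
};

int main() {
    const double g = 9.81;           // gravitational acceleration (m/s^2)
    const double dt = 1e-3;          // integration time step (s)
    const double restitution = 0.8;  // fraction of speed kept at each bounce
    State s{10.0, 0.0};              // drop from 10 m at rest

    for (double t = 0.0; t < 10.0; t += dt) {
        // Continuous dynamics of the current mode: y' = v, v' = -g.
        s.y += dt * s.v;
        s.v -= dt * g;

        // Guard and reset of the discrete transition: bounce at the ground.
        if (s.y <= 0.0 && s.v < 0.0) {
            s.y = 0.0;
            s.v = -restitution * s.v;
        }
    }
    std::printf("final height %.3f m, velocity %.3f m/s\n", s.y, s.v);
    return 0;
}
```

In Ellman's approach, code of this general shape, together with the mode-transition logic, is generated automatically from the instantiated THLTL schemata rather than written by hand.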
Modular Checking of Generative Programs with C++ "Concepts"
by Jeremy Siek
C++ templates are a popular tool for generative programming, using techniques such as the curiously-recurring template pattern, mixin layers, and template metaprogramming. The next revision of the C++ standard will include significant changes to the template mechanism, with the addition of the _concept_ feature for constraining template parameters. The main goal of the concept extension is to make generic algorithms easier to implement and use, but concepts can be used to provide modular type checking for any parameterized components, such as those used in generative programs. In this talk I give an introduction to the C++ _concept_ extension and show how concepts can be applied in the context of generative programming.
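As a concrete (if anachronistic) illustration, the sketch below constrains a template parameter with a concept and checks a generic algorithm against it. It uses the concepts syntax that was eventually standardized in C++20, which differs from the C++0x concepts design discussed in the talk; the concept and function names are my own illustrative choices.

```cpp
#include <concepts>
#include <iterator>
#include <vector>

// Constrain the template parameter: a forward iterator whose value type
// supports the comparison operators. (C++20 syntax; the C++0x "concepts"
// proposal described in the talk used a different syntax and semantics.)
template <typename It>
concept OrderedForwardIterator =
    std::forward_iterator<It> &&
    std::totally_ordered<std::iter_value_t<It>>;

// A generic algorithm checked modularly against the concept: a call with an
// unsuitable iterator is rejected at the call site, against the stated
// requirements, instead of failing deep inside the instantiated body.
template <OrderedForwardIterator It>
It min_element_checked(It first, It last) {
    It best = first;
    for (It it = first; it != last; ++it)
        if (*it < *best)
            best = it;
    return best;
}

int main() {
    std::vector<int> xs{3, 1, 4, 1, 5};
    return *min_element_checked(xs.begin(), xs.end());  // returns 1
}
```

The call-site checking is what makes the type checking modular in the sense of the abstract: the template body is checked once against the concept, and each use is checked only against the concept's requirements.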
The Converge programming language
Laurence Tratt http://tratt.net/laurie/, King's College London
In this talk I will present the Converge programming language. Converge is a dynamically typed, object-oriented programming language with a compile-time meta-programming facility and an extendable syntax that can be used to embed Domain Specific Languages (DSLs). I will focus on Converge's compile-time meta-programming facility and explain what extra features Converge provides to facilitate the rapid construction of reliable, expressive DSLs.
Further information can be found at http://convergepl.org/.
Compile-Time Reflection: Program Augmentation with Patterns
Manuel Fahndrich http://research.microsoft.com/~maf, Microsoft Research Redmond
This talk describes our design for integrating a limited form of program transforms into our experimental programming language Sing#, an extension to C#. Transforms combine a pattern for matching existing program constructs above the level of individual statements with templates for generating new declarations and code in the matched contexts. The advantage of our design is that it makes program transformation accessible as a high-level, visual programming language construct, with the guarantee that well-formed transforms produce well-formed code when applied.
A paper on the subject is available at http://research.microsoft.com/~maf/papers/gpce06.pdf.
Can We Teach Computers to Write Fast Libraries?
Markus Püschel
Electrical and Computer Engineering
Carnegie Mellon University
The short life cycles and the increasing complexity and diversity of computing platforms pose a difficult challenge for developers of high-performance numerical software. Optimal programs have to take full advantage of the platform’s parallelism, memory hierarchy, and available instruction set. Compiler optimization is inherently limited and produces suboptimal performance for all but very simply structured programs. To make matters worse, optimal code is platform-dependent. As a consequence, developers are forced to continually re-optimize their software, often even reverting to assembly coding just as they did 50 years ago.
In this talk we present Spiral, a program generation system for linear transforms. Spiral generates highly optimized, platform-tuned implementations of transforms directly from a problem specification. For a user-specified transform, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and “intelligently” searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative framework for representing algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Optimization includes parallelization for vector architectures, shared and distributed memory platforms, GPUs, and even FPGAs. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-tuned library code. As for the question in the title: Spiral shows that, at least for well-understood problem domains, a positive answer may be in reach.
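For illustration, one representative rule in such a framework is the Cooley-Tukey factorization of the discrete Fourier transform, written here in the Kronecker-product notation used in the Spiral literature (T denotes a diagonal twiddle matrix and L a stride permutation); the exact rule set Spiral uses is documented in its papers:

    DFT_{nm} = (DFT_n \otimes I_m) \; T^{nm}_m \; (I_n \otimes DFT_m) \; L^{nm}_n

Rewriting with rules of this kind lets the system explore many different recursive factorizations of the same transform and select the one that maps best onto the target platform.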
More information is available at the Spiral website: http://www.spiral.net