MoVES Events

PhD Defense on Reverse Engineering

We have the pleasure of inviting you to the Ph.D. defense of Joris Van Geet. The Ph.D. itself is entitled “Reverse Engineering for Mainframe Enterprise Applications: Patterns and Experiences”; you’ll find an abstract below.

The Ph.D. reports on four years’ worth of research concerning the application of state-of-the-art reverse engineering techniques in the industrial context of banks and insurance companies. Not surprisingly, these techniques have much to offer, yet their practical application raises many issues seldom considered in research. In that sense, this Ph.D. is a perfect example of our “industry as a lab” research strategy: we don’t only test techniques under artificial lab conditions, but also validate them in the context of real software companies. As promotor I am therefore quite honoured to invite you for this occasion.

Practical information

  • When? Monday, November 8th, 2010 at 17:00
  • Where? In the “promotiezaal van het klooster van de Grauwzusters” (stadscampus, building S); Lange Sint-Annastraat 7, 2000 Antwerpen
  • Route descriptions and parking suggestions can be found via:
  • Registration? Send a simple e-mail to Joris Van Geet


The ability to evolve and maintain software reliably is a major challenge for today’s organizations. Reverse engineering can support this challenge by recovering knowledge from existing systems. The reverse engineering community is a vibrant research community, which has resulted in many useful techniques and research prototypes to recover this knowledge; however, few of them have been exploited industrially in significant ways.

To understand this lack of industrial adoption, this dissertation investigates the applicability of existing reverse engineering techniques in practice, more specifically on mainframe systems in the financial services industry. We report our experience with applying software views, feature location and redocumentation, and we identify the main cause of the lack of adoption to be the mismatch between the characteristics of reverse engineering and the characteristics of the (development) processes in place in the organizations.

To resolve this situation, we recommend that organizations incorporate reverse engineering into their processes, urge researchers to apply their techniques not only in the lab but also in realistic circumstances, and provide practitioners with patterns that will allow them to apply reverse engineering in the meantime.

VAST 2011 - 1st Int’l Workshop on Variability-intensive Systems Testing, Validation & Verification


Driven by rising customer demands, continuously changing context conditions (such as legal or business settings), and the wish to leverage existing development assets, modern software systems are increasingly expected to be configurable or reconfigurable. This leads to software systems that exhibit a high degree of variability. Over the past years, development paradigms that enable engineering and maintaining such high-variability software systems have thus appeared, the most prominent examples being Software Product Lines (SPL), Service-Oriented Architecture (SOA), and Dynamically Adaptive Systems (DAS). Due to the productivity gains these paradigms promise and the powerful ways of handling variability that they offer to application developers, they are gaining popularity in a world where tight schedules and ever-changing business needs are the rule.

As for any development paradigm, it is of paramount importance to understand how to perform effective and efficient validation and verification (V&V). This is especially challenging for variability-intensive systems. In the case of SPL, one key reason the V&V task is a complex endeavor is that variability exponentially increases the number of tests and checks needed. Furthermore, as SPL V&V activities concern a set of products and/or reusable assets, adequate coverage criteria are needed to establish confidence in the quality of the V&V results. In the SOA case, V&V faces a similar – if not worse – complexity problem. Due to the loose coupling and late binding of services, they can be composed into a potentially unbounded number of different service-based systems, often not known when the individual services are created. In the DAS case, the situation is similar, but special interest is devoted to the runtime issues of V&V.

Initial solutions to handle the high variability during validation and verification have been proposed by various communities, including SPL, SOA, and DAS, but also by more general communities, such as MDD and V&V. This situation makes it difficult to get a global view of key challenges, results and emerging ideas in this area. The main purpose of the VAST workshop is thus to gather researchers working on testing, verification & validation of software product lines, service-based systems and dynamically adaptive systems to discuss novel ideas and understand how they can learn from each other. Ultimately, these discussions could lead to a common research agenda for this discipline.


Variability is a key enabler for most systems throughout their development and evolution. Indeed, customer demands and continuously changing contexts (environment, legal and business settings, technology, etc.) ask for more adaptability in software engineering. This major trend impacts the whole engineering process, with key emerging technologies such as Software Product Lines (SPL), Service-Oriented Architecture (SOA), Dynamically Adaptive Systems (DAS) and Aspect-Oriented Modelling (AOM). All these paradigms aim at providing solutions to introduce and manage variability at different lifecycle stages.

While variability is at the core of the Software Product Line (SPL) paradigm, it can also be enabled by specific properties of the architecture on which the software relies. For example, Service-Oriented Architecture (SOA) exploits loose coupling between services to facilitate the design and deployment of applications, and Dynamically Adaptive Systems use reflection and runtime transformation to adapt to their environment.

As a matter of fact, many technologies enable variability. As a consequence, combinatorial explosion due to variability is a common problem spanning all these paradigms. Testing and verifying variability-intensive systems is an issue that has been studied specifically. To date, some dedicated techniques have been developed (such as combinatorial interaction testing or modular checking) to contend with this explosion during the verification & validation process. However, the field is still in its infancy. Even though some results have shown promising outcomes in theory, their practical applicability still has to be demonstrated. The integration/combination of V&V techniques may be investigated to address the aforementioned validation challenge. Questions concerning the scalability, quality and usability of the results, and integration into the development lifecycle still have to be answered. Furthermore, being scattered across several communities, some general advances may be difficult to share and disseminate.
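To make the combinatorial explosion concrete: with n independent boolean features, exhaustive testing requires 2^n configurations, whereas combinatorial interaction testing covers all pairs of feature values with far fewer. The following is a minimal illustrative sketch of a greedy pairwise construction, not a tool or algorithm endorsed in this call; real covering-array generators are considerably more refined.

```python
from itertools import combinations, product

def covered_by(config, pair):
    """True if configuration `config` exhibits the feature-value pair."""
    (i, vi), (j, vj) = pair
    return config[i] == vi and config[j] == vj

def pairwise_suite(n_features):
    """Greedily pick 0/1 configurations until every pair of feature
    values ((i, vi), (j, vj)) appears in at least one configuration."""
    uncovered = {((i, vi), (j, vj))
                 for i, j in combinations(range(n_features), 2)
                 for vi, vj in product((0, 1), repeat=2)}
    suite = []
    while uncovered:
        # choose the configuration covering the most still-uncovered pairs
        best, best_cov = None, set()
        for cand in product((0, 1), repeat=n_features):
            cov = {p for p in uncovered if covered_by(cand, p)}
            if len(cov) > len(best_cov):
                best, best_cov = cand, cov
        suite.append(best)
        uncovered -= best_cov
    return suite

n = 8
suite = pairwise_suite(n)
print(f"exhaustive: {2 ** n} configurations, pairwise: {len(suite)}")
```

Even this naive greedy construction shrinks the suite from hundreds of configurations to a dramatically smaller one; the techniques solicited above aim at such reductions at realistic scale and under real constraints.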

The aims of this workshop are to provide a forum in which practitioners and researchers can share their ideas and results and to establish a common research agenda for testing, verification and validation of variability-intensive systems.


Contributions are expected in all areas of V&V applied to variability-intensive systems. Topics include but are not limited to:

  • Test Definition (during Domain Engineering / Application Engineering, Problem Space / Solution Space)
  • Test Generation and Test Selection
  • Test Oracles
  • Acceptance Criteria
  • Assessing Test Quality and Coverage
  • Variability Formalization for Model-checking and Verification
  • Variability Formalization for Testing and Validation
  • Combining Testing and Model-checking
  • Model-driven and Model-based Testing
  • Variability Space Exploration Strategies: e.g. Incremental vs Global
  • Test Case Reuse
  • Testing Processes for Variability-intensive Systems
  • Testing @ Runtime (Online and “in-service” Testing)
  • Verification @ Runtime
  • Regression Testing and Verification
  • Model Checking for Variability
  • Scalability Issues
  • Compositional and Incremental Checking
  • Extra-functional Properties (security, performance)
  • Variability V&V for Specific Application Areas (dependability, resilience, etc.)

Submission, Evaluation and Publication

Papers can be submitted in the following categories:

  • Research/Industry papers: Research papers have to demonstrate original ideas and emerging results/tool support. They will be evaluated on their technical soundness and how they advance the current state of the art. Industry papers will typically describe the application of particular techniques to concrete variability-intensive systems. Industry papers will be evaluated regarding the relevance and quality of lessons learned. Not more than 8 pages.
  • Vision/Position papers: Position papers survey the current state of the art and argue where the community should go. This is also the venue for early ideas that are not mature enough to be described in a research paper. Not more than 4 pages.
  • Demo papers: Demo papers describe a tool addressing V&V for variability-intensive systems. Each paper should present the features/limitations of the tool as well as a case study, which will be demonstrated at the workshop in case of acceptance. Not more than 2 pages.

Paper submission is handled via EasyChair.

Papers should conform to the two-column IEEE conference publication format (the format of the main conference: ICST 2011).

Each paper will be reviewed by at least three PC Members. Accepted papers will be published by the IEEE Computer Society in the IEEE Digital Library.

Important Dates

  • Submission deadline for Research/Industry papers: Dec. 22, 2010
  • Submission deadline for Vision/Demo papers: Jan. 14, 2011
  • Notification of acceptance: Feb. 1, 2011
  • Submission deadline for camera-ready copies: Feb. 15, 2011

ASE 2010 - The 25th IEEE/ACM International Conference on Automated Software Engineering

The 25th IEEE/ACM International Conference on AUTOMATED SOFTWARE ENGINEERING, September 20-24, 2010, Antwerp, Belgium

The 25th Anniversary Edition of the IEEE/ACM International Conference on Automated Software Engineering will be held in Antwerp, Belgium, 20-24 September 2010.

Celebrating its 25th anniversary this year in the City of Diamonds, the ASE conference has become one of the world’s premier Software Engineering venues. Software engineering is concerned with the analysis, design, implementation, testing, and maintenance of large software systems. Automated software engineering focuses on how to automate or partially automate these tasks to achieve significant improvements in quality and productivity. ASE 2010 will present three keynote addresses by prominent speakers and a selection of 33 technical research papers, 31 short papers and 18 tool demonstrations about emerging topics in these domains. The event will also feature a doctoral symposium and a number of associated workshops and in-depth tutorials.


  • Jan Bosch
    “Towards Compositional Software Engineering”
    (September 22, 2010)
  • Cordell Green
    25th Anniversary Keynote: “The Actual Implementation Will Be Derived from the Formal Specification” – KBSA, 1983
    (September 23, 2010)
  • Axel van Lamsweerde
    “Model Engineering for Model-Driven Engineering”
    (September 24, 2010)


  • WASDeTT-3: 3rd International Workshop on Academic Software Development Tools
  • IWPSE-Evol 2010: 4th International Joint ERCIM/IWPSE Symposium on Software Evolution
  • FMICS: 15th International ERCIM Workshop on Formal Methods for Industrial Critical Systems
  • MOMPES 2010: 7th International Workshop on Model-Based Methodologies for Pervasive and Embedded Software
  • Moves-Verif
  • 3rd Workshop on Living With Inconsistency in Software Development
  • ACoTA: 1st International Workshop on Automated Configuration and Tailoring of Applications
  • TAV-WEB-10: Workshop on Testing, Analysis and Verification of Web Software

For further details see


  • T1: Infinite games and program synthesis from logical specifications
    by Christof Loeding
  • T2: Domain-Specific Modeling: Enabling Full Code Generation
    by Juha-Pekka Tolvanen
  • T3: The use of text retrieval techniques in software engineering
    by Andrian Marcus & Giuliano Antoniol
  • T4: Research Methods in Computer Science
    by Serge Demeyer
  • T5: Automated Component-Based Verification
    by Dimitra Giannakopoulou & Corina Pãsãreanu

For further details see


The conference takes place in Antwerp, one of the major cities in Belgium, both economically and culturally. The diamond-encrusted ASE 2010 25th anniversary logo is inspired by Antwerp’s world-famous diamond industry. The ASE 2010 conference will be hosted in “De Meerminne”, a brand-new building of the Universiteit Antwerpen. De Meerminne is located right in the middle of the city centre, which makes it an extremely attractive location, within walking distance of shops, restaurants, pubs, historical buildings, and so much more!

For further details see


Registration has been open since June 15. A list of recommended hotels with preferential rates is available. Note that the early registration discount and preferred hotel rates are only available until August 1st, 2010.


  • Charles Pecheur, Université catholique de Louvain, General Chair
  • Serge Demeyer, Universiteit Antwerpen, Local Chair
  • Kim Mens, Université catholique de Louvain, Treasurer
  • Elisabetta di Nitto, Politecnico di Milano, Program Chair
  • Jamie Andrews, University of Western Ontario, Program Chair

For further information, please visit the conference website at

The organization of ASE 2010 in Antwerp is an initiative of the MoVES consortium.

VARI-ARCH 2010 - 1st International Workshop on Variability in Software Product Line Architectures

Workshop Goal

The objective of this workshop is to bring together researchers from the software product line community and software architecture community to identify critical challenges and progress the state-of-the-art on variability in software product line architectures.

Introduction & Motivation

A software product line is a collection of similar software systems that are constructed from a shared set of assets in a prescribed way. Software product lines are valued by industry as they increase productivity and enable strategic, planned reuse of assets among multiple products.

The product line architecture is key to the success of a software product line. In contrast to single system architectures, a product line architecture is designed to underpin multiple systems. A product line architecture reifies the commonalities between the various products and also clearly delineates the variability that is allowed between products. As such a product line architecture is paramount to predictably achieve the qualities of the various products in a software product line.

Two prominent communities that have been studying product line architectures are the community on software product lines and the community on software architecture. Whereas both communities have been successful in addressing some of the challenges of product line architectures, persistent challenges remain, in particular concerning variability in product line architectures.

In the software product line community, it is generally acknowledged that the variability of a product line should be captured explicitly. Many variability modeling techniques exist, but most of them capture variability relative to generic/holistic concepts such as “features” or “decisions” and do not specifically focus on variability relative to the software architecture of the product line.
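As a concrete miniature of feature-based variability modeling, a feature model boils down to boolean features plus constraints, and the set of valid products is the set of satisfying assignments. The sketch below uses entirely invented feature names and is purely illustrative:

```python
from itertools import product

# Toy feature model (all names invented): 'core' is mandatory,
# 'ssl' requires 'net', and 'ssl' excludes 'lite'.
FEATURES = ("core", "net", "ssl", "lite")

def is_valid(cfg):
    """cfg maps feature name -> bool; True if all constraints hold."""
    return (cfg["core"]
            and (not cfg["ssl"] or cfg["net"])
            and not (cfg["ssl"] and cfg["lite"]))

def valid_products():
    """Enumerate every constraint-satisfying feature selection."""
    for values in product((False, True), repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if is_valid(cfg):
            yield cfg

products = list(valid_products())
print(len(products))  # 5 of the 2**4 = 16 raw combinations are valid
```

Architecture-level variability, the focus of this workshop, asks how such selections map onto architectural views rather than onto generic features alone.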

In the software architecture community, it is generally acknowledged that a software architecture should be described using multiple views. Each view captures the architecture using (a) suitable model(s) from the perspective of a specific stakeholder and his/her concerns. In contrast to single system architectures, variability is a key quality of product line architectures. Although some work exists in this area, it is under-investigated how viewpoints/views can be used to support variability of product line architectures.

See VARI-ARCH 2010 website for details.

MDPLE 2010 - 2nd International Workshop on Model-Driven Product Line Engineering

Workshop Summary and Goals

The fundamental premise of product line engineering (PLE) is that the investment in a family of products pays off later by allowing systematic, efficient derivation of products. This should be automated as much as possible, which can be achieved via model-driven engineering (MDE) techniques.

Research in PLE and MDE has many intersections. PLE leverages MDE to specify variability, domain concepts, configurations and more. (Semi-) automated product derivation requires mappings between the models on different abstraction layers and model transformations to derive an implementation from a configuration.
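In its most stripped-down form, derivation can be pictured as a transformation from a configuration to concrete implementation assets. The mapping below is a hypothetical sketch (feature and file names invented), not the API of any real PLE tool:

```python
# Hypothetical feature-to-asset mapping used to assemble one product.
FEATURE_TO_ASSETS = {
    "core": ["kernel.py"],
    "net":  ["socket_layer.py"],
    "ssl":  ["socket_layer.py", "tls_wrapper.py"],
}

def derive_product(selected_features):
    """A toy model-to-artifact transformation: configuration -> manifest."""
    manifest = []
    for feature in selected_features:
        for asset in FEATURE_TO_ASSETS[feature]:
            if asset not in manifest:   # shared assets are included once
                manifest.append(asset)
    return manifest

print(derive_product(["core", "net", "ssl"]))
# ['kernel.py', 'socket_layer.py', 'tls_wrapper.py']
```

Real derivation chains such mappings across abstraction layers, which is exactly where the model transformations mentioned above come in.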

In addition, the latest research shows an increasing need for concepts to deal with very large and evolving systems. Product lines can no longer rely on an immutable scope but need to be considered as evolving systems that can span organizational boundaries. Thus, there is a need to apply and investigate the latest concepts from MDE, such as model-driven evolution and co-evolution, consistency management, multi-paradigm modelling, etc.

In this workshop we aim to bring together researchers and practitioners to foster the exchange of concepts and ideas between them to address these challenges.

Workshop Topics

We are interested in the application of concepts from MDE to the area of product line engineering, including (but not limited to):

  • Modelling of software product lines
  • Variability modelling
  • Automated and interactive product derivation
  • Aspect-oriented approaches
  • Multiple binding time and run-time variability
  • Automatic inference of variability constraints
  • Advanced approaches and process models
  • Evolution and change
  • Traceability and integrated handling of multiple models and artefacts
  • Product validation
  • Scalability and complexity

We explicitly encourage submission of case studies and experience reports from industry where such techniques have been applied in industrial practice and on a larger scale.

See MDPLE 2010 website for details.

Francqui Chair Theo D’Hondt - March/May 2010

The University of Namur cordially invites you to attend the series of lectures entitled “Growing a Language from the Inside Out” by Prof. Theo D’Hondt (VUB), holder of the 2010 Francqui Chair, at the University of Namur (FUNDP), Faculty of Computer Science, Auditorium I2.


  • Fri., March 19th, 15h00–17h00: Inaugural Lecture: On the renewed need for language engineering. This lecture will be followed by a cocktail. Registration required.
  • Fri., March 26th, 13h30–17h30: Interpreters and Virtual Machines
  • Fri., April 2nd, 13h30–17h30: Continuations and Continuation Passing Style
  • Fri., April 23rd, 13h30–17h30: Using primitive execution models
  • Fri., April 30th, 13h30–17h30: Memory Management as a Crosscutting Concern
  • Fri., May 7th, 13h30–17h30: Binding it all Together


The phrase “Growing a Language” was coined by Guy Steele in his widely recognised keynote talk at the 1998 OOPSLA conference. It refers to the need for programming languages that consist of a powerful and expressive core that is easily extended to satisfy specific needs. In this series of lectures we discuss a similar need at the level of the language processor itself. We need to bridge the gap between the abstract concerns addressed by the language and the features offered by the hardware platform, with all their qualities and limitations. Nevertheless, we want to underline the need for software reuse at this very technical level – a fact which is far too often forgotten.

The notion of Programming Language Engineering was introduced to describe this branch of computer science. It refers to the assembly and mastery of relevant methods and techniques from science and technology to facilitate the construction and application of programming language processors. It is one of the oldest disciplines in computer science and it has never been very far away from the core of research in our field.

The explosive growth in hardware performance, known as Moore’s law, was until very recently responsible for a false sense of security in the world of computing. In particular, many felt that we had reached a stable situation in the use of programming languages. Today we see that computer engineering has been forced to choose the path of replication rather than miniaturisation in order to follow the ever-increasing demands for performance. This has led to a renewed interest in parallel computing and the programming language abstractions required by it. This evolution is in full swing in the field of high performance computing but may be expected to extend to the desktop in the very near future.

These lectures are inspired by two concerns. In the first place, they aspire to rekindle interest in programming language research results dating back more than 20 years, and hence outside the time window accessible to young – and not so young – researchers. For instance, notions of continuations are re-emerging, but are today far too poorly understood. An immersion in past knowledge rather than a re-invention of the wheel seems indicated. In the second place, these lectures are a critique of the outright extrapolation of current language technology to handle the many-core revolution.
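For readers who have not met continuations before, here is a minimal continuation-passing-style sketch (in Python, for illustration only, rather than in the languages used in the lectures): each function takes “the rest of the computation” as an explicit argument instead of returning a value.

```python
# Direct style: square(2 + 3) == 25.
# CPS makes control flow explicit: every function receives a
# continuation k and hands its result to k instead of returning it.
def add_cps(a, b, k):
    k(a + b)

def square_cps(x, k):
    k(x * x)

result = []
add_cps(2, 3, lambda s: square_cps(s, result.append))
print(result[0])  # 25
```

Once the continuation is a first-class value it can be stored, resumed, or discarded, which is what makes continuations a foundation for implementing control constructs inside interpreters and virtual machines.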

Finally, we eat our own dog food: these lectures will refer to actual programming languages and their related software artefacts; and the end product will be a concrete language processor built according to the precepts advanced during the lectures.


Lectures will take place at:

Faculty of Computer Science
Auditorium I2 (1st floor)
Rue Grandgagnage 21
5000 Namur

Attendance is free but registration is required. To register, please send a message to Isabelle Daelman. Parking for visitors is available rue Henri Lemaître, but you must ask Mrs Daelman for a parking permit.

For any other question, please contact Prof. Patrick Heymans.

Symposium - Case Studies as Empirical Research Methods - April, 15th 2009

Software engineering research in general (and software evolution research in particular) must seek to validate its techniques in a realistic context. Indeed, “in vitro” research is necessary to understand where and why a given technique makes a difference. However, it must be followed by “in vivo” research to see whether a given technique delivers on its promises under the harsh conditions of reality. This explains why case studies are one of the dominant research methods in software engineering, as they provide a lightweight approach to “in vivo” research.

However, what precisely does it mean to do a case study? When does a case become so simple that we speak of a toy example? How do we, as researchers, avoid getting too involved (and biased)?

During this symposium we will try to (at least partially) address these questions. Two leading researchers (namely Prof. Arie van Deursen from the Delft University of Technology, The Netherlands, and Prof. Per Runeson from Lund University, Sweden) will share their experience with the case study research method in two presentations.

The symposium will be held on Wednesday, April 15th 2009 between 13:30 and 15:30 at the University of Antwerp, Department of Mathematics and Computer Science. (The program, directions on how to get there and other practical information can be found online.) The symposium will be followed by the Ph.D. defense of Bart Van Rompaey concerning “Developer testing as an asset during software evolution: a series of empirical studies”.

Both the symposium and the Ph.D. defense are open to everyone interested and free of charge. However, participants should register by sending an e-mail to Prof. Serge Demeyer.

LATE 2009 - 02/03/2009

5th International Workshop on Linking Aspect Technology and Evolution
LATE'09 Charlottesville, Virginia, USA
March 2, 2009
MoVES Partners: VUB
LATE'09 is organized in cooperation with MoVES.

Software evolution lies at the heart of the software development process, and is hindered by problems such as maintainability, evolvability, understandability, etc. Aspect-oriented software development (AOSD) is an emerging software development paradigm that tries to achieve better separation of concerns. It is often claimed that aspect-oriented design and implementation improves maintainability, evolvability and understandability of the software.
This workshop aims to investigate this claim and explore the relationship between software evolution and AOSD. In particular, the workshop’s objective is to study the impact of AOSD on software evolution on the one hand, and the impact of software evolution on AOSD on the other hand. The former subject could for example deal with diverse issues such as how using AOSD improves the quality of the software, and thus eases software evolution, or how existing applications can be evolved into AOSD applications. The latter subject is concerned with the way existing software evolution techniques (e.g., refactoring) are affected by AOSD, and how they should be extended in order to include AOSD concepts.

VaMoS 2009 - 28/01/2009

3rd International Workshop on Variability Modelling of Software-intensive Systems
VaMoS 09 Sevilla, Spain
January 28-30, 2009
Managing variability is a major concern in the development, maintenance and evolution of software-intensive systems. To be managed effectively and efficiently, variability must be explicitly modelled. Numerous variability modelling techniques have been proposed both by academia and industry. The aim of the VaMoS workshop series is to bring together researchers from various areas of variability modelling to discuss advantages, drawbacks and complementarities of the various approaches and to present new results for variability modelling and management.

BENEVOL 2008 - 11 & 12/12/2008

7th BElgian-NEtherlands software eVOLution workshop
BENEVOL 2008 Eindhoven, The Netherlands
December 11 - 12, 2008
MoVES Partners: almost all
This year, the 7th edition of the BElgian-NEtherlands software eVOLution workshop (BENEVOL 2008) will take place in Eindhoven, The Netherlands. The two-day workshop will be held on Thursday 11 and Friday 12 December 2008. The aim of the workshop is to bring together researchers to identify and discuss important principles, problems, techniques and results related to software evolution research and practice. The special theme of BENEVOL 2008 is “Ensuring software quality in evolution”.

WCRE 2008 - 15/10/2008

15th Working Conference on Reverse Engineering
WCRE 2008 Antwerp, Belgium
October 15 - 18, 2008
WCRE 2008, the 15th Working Conference on Reverse Engineering will be organized in Antwerp in October 2008. WCRE is the premier research conference on the theory and practice of recovering information from existing software and systems.

CHAMDE 2008 - 28/09/2008

International Workshop on Challenges in Model Driven Software Engineering
CHAMDE'08 Workshop at MoDELS 2008 Toulouse, France
September 28, 2008
MoVES Partners: VUB, KUL, UMH
CHAMDE’08, the Workshop on Challenges in Model Driven Software Engineering will be organized at MoDELS in September 2008. The main objective of this workshop is to provide a forum to disseminate new and revolutionary ideas, to discuss future challenges, and to encourage new ways of thinking in the field of MDE.

MCCM 2008 - 30/09/2008

International Workshop on Model Co-Evolution and Consistency Management
MCCM'08 Workshop at MoDELS 2008 Toulouse, France
September 30, 2008
MoVES Partners: VUB, FUNDP
MCCM’08, the Workshop on Model Co-Evolution and Consistency Management will be organized at MoDELS in September 2008. The main objective of this workshop is to provide a forum for researchers and practitioners who work on innovative solutions to deal with model co-evolution and consistency management.

SVPP 2008 - 08/08/2008

Symposium on Software Variability from a Programmer's Perspective
SVPP'08 Brussels, Belgium
August 8-9, 2008
MoVES Partners: VUB
The goal of this two-day symposium is to promote discussion about proper programming language support required to deal with software variability. Rather than ad hoc implementations of infrastructure to cope with software variability, we search for solutions that provide built-in support by either extending existing programming languages with the language features required, or creating completely new domain-specific languages. Invited speakers are:
Jim Coplien, Robert Hirschfeld, Karl Lieberherr, and Oscar Nierstrasz

2008 ACM SIGSOFT Outstanding Research Award

Prof. Axel van Lamsweerde (UCL) has been awarded the 2008 ACM SIGSOFT Outstanding Research Award. This award is presented to an individual who has made significant and lasting research contributions to the theory or practice of software engineering.

Software Engineering Seminar - 29/04/2008

Prof. Dr. Jean-Pierre Briot and Dr. Mark S. Miller
PROG-SSEL Vrije Universiteit Brussel, Campus Etterbeek, Brussels, Belgium
April 29, 2008 - 13h00
MoVES Partners: VUB
Agents and Components: Comparison and Perspectives
by Jean-Pierre Briot
(Laboratoire d’Informatique de Paris 6 (LIP6), Université Pierre et Marie Curie, Paris - CNRS)
Agents and components, although developed independently, are both concepts aimed at designing more composable and adaptable software. The first part of the talk will compare them along the history of programming abstractions, including other important steps such as objects, actors and services. More precisely, we will consider a common frame of reference with three dimensions: action selection flexibility, coupling flexibility and abstraction level. Then, if time allows, in a second part of the talk we will point out a few directions for combining the notions of agents and components. A first direction is making components more autonomous and flexible by importing some ideas from agents and multi-agent systems. A second and dual direction is using components to design building blocks for constructing agents.

Tradeoffs in Retrofitting Security: An Experience Report
by Mark S. Miller
(Google Research)
In 1973, John Reynolds’ and James Morris’ Gedanken language retrofitted object-capability security into an Algol-like base language. Today, there are active projects retrofitting Java, JavaScript, Python, Mozart/Oz, OCaml, Perl, and Pict. These represent a variety of approaches, with different tradeoffs regarding legacy compatibility, safety, and expressivity. In this talk I propose a taxonomy of these approaches, and discuss some of the lessons learned to date. I will also demo CapDesk, a proof of concept of a virus-safe desktop, applying object-capability principles at the user interface level.

EVOL@Mons 2008 - 25/02/2008


EVOL@Mons - Research Seminar on Software Evolution.

In order to participate, read instructions on the following website

Talk by Prof. Hans Vangheluwe - 15/11/2007

Multi-Paradigm Modelling and the quest for tool support
Prof. Hans Vangheluwe
Modeling, Simulation & Design Lab
School of Computer Science
McGill University
Montréal, Québec, Canada H3A 2A7
Download the slides of the talk.


Models are invariably used in Engineering (for design) and Science (for analysis) to precisely describe structure as well as behaviour of systems. Models may have components described in different formalisms, and may span different levels of abstraction. In addition, models are frequently transformed into domains/formalisms where certain questions can be easily answered. We introduce the term “multi-paradigm modelling” to denote the interplay between multi-abstraction modelling, multi-formalism modelling and the modelling of model transformations.

The presentation will start with some anecdotal evidence of the need for multi-paradigm modelling. Subsequently, the foundations of multi-paradigm modelling will be presented. It will be shown how all aspects of multi-paradigm modelling can be explicitly (meta-)modelled, enabling the efficient synthesis of (possibly domain-specific) multi-paradigm (visual) modelling environments. We have implemented our ideas in the tool AToM^3 (A Tool for Multi-formalism and Meta Modelling). AToM^3 will be introduced by means of a simple example. Finally, an overview will be given of current and future challenges of multi-paradigm modelling.

Short CV of Prof. Hans Vangheluwe
Hans Vangheluwe is an Associate Professor in the School of Computer Science at McGill University, Montréal, Canada. He holds a D.Sc. degree, as well as M.Sc. degrees in Computer Science and Theoretical Physics, all from Ghent University in Belgium. He has been a Research Fellow at the Centre de Recherche Informatique de Montréal, Canada, the Concurrent Engineering Research Center, WVU, Morgantown, WV, USA, at the Delft University of Technology, The Netherlands, and at the Supercomputing and Education Research Center of the Indian Institute of Science (IISc), Bangalore, India. At McGill University, he teaches Modelling and Simulation, as well as Software Design. He also heads the Modelling, Simulation and Design (MSDL) research lab. He has been the Principal Investigator of a number of research projects focused on the development of a multi-formalism theory for Modelling and Simulation. Some of this work has led to the WEST++ tool, which was commercialised for use in the design and optimization of bioactivated sludge Waste Water Treatment Plants. His current interests are in domain-specific modelling and simulation. The MSDL’s tool AToM^3 (A Tool for Multi-formalism and Meta-Modelling), developed in collaboration with Prof. Juan de Lara, uses meta-modelling and graph grammars to specify and generate domain-specific environments. Recently, he has applied model-driven techniques in a variety of areas such as modern computer games, dependable and privacy-preserving systems (the Belgian electronic ID card), embedded systems, and the design and synthesis of advanced user interfaces.



The 6th edition of the BElgian-NEtherlands software eVOLution workshop (BENEVOL 2007) will take place at the University of Namur, Belgium.

The aim of the workshop is to bring together researchers to identify and discuss important principles, problems, techniques and results related to software evolution research and practice.

The theme of this 2007 edition of BENEVOL is: “Evolving Software-Intensive Systems”.

The term is meant to invite contributions considering software as part of a broader system that defines its purpose and in which it operates. For instance, software is usually part of an organisational information system, or is embedded into physical devices (such as mobile phones). Thereby, software interacts with a complex, heterogeneous and changing environment. Evolving software must continue to serve the purpose imposed by its context and gracefully co-evolve with it.

Participation & Submission :

If you are interested in giving a talk at BENEVOL 2007, please send an extended abstract of your talk to

Contributions should be 2-4 pages long and describe recent results or novel ideas in the research and practice of software evolution.

Important dates :

Submission deadline: November 1, 2007
Registration deadline: December 7, 2007
BENEVOL’07 workshop: December 13-14, 2007

7th International Conference on Aspect-Oriented Software Development

AOSD 2008 is the premier conference on software modularity, with an emphasis on novel notions of modularity that crosscut traditional abstraction boundaries. AOSD 2008 is hosted by the Vrije Universiteit Brussel in Belgium from 31 March to 4 April 2008.

8th International Workshop on Object-Oriented Reengineering

8th International Workshop on Object-Oriented Reengineering, collocated with ECOOP 2007, Berlin, Germany, 30 July 2007.

MoVES UA - KUL - TUD event

1 June 2007

Location: Universiteit Antwerpen

The event consisted of several presentations during which the Universiteit Antwerpen, the Katholieke Universiteit Leuven, and the Technische Universiteit Delft presented their MoVES-related research tracks. In the afternoon, one of our European partners also took the opportunity to present his work. The presentations are available for download.

Please contact the presenters if you spot an opportunity for collaboration!


25 May 2007

Location: Facultés Universitaires Notre-Dame de la Paix Namur

The event consisted of several presentations during which the Facultés Universitaires Notre-Dame de la Paix Namur and the Université catholique de Louvain presented their MoVES-related research tracks. In the morning we had eight presentations by members of the UCL; the afternoon was filled with six presentations by members of FUNDP (PReCISE). The event ended with a cocktail and poster session during which we had a great opportunity to discuss possible collaborations. A booklet of the PReCISE group is available, in which you can find more information about the work of its researchers. The presentations are also available for download.

Please contact the presenters if you spot an opportunity for collaboration!

  1. An Overview of PReCISE - Prof. Dr. Jean-Luc Hainaut (FUNDP)

MoVES ULB - ULg event

20 April 2007

Location: Université Libre de Bruxelles

The event consisted of several presentations during which the Université Libre de Bruxelles and the Université de Liège presented their MoVES-related research tracks. The general setup of the meeting was similar to the kickoff event we had in March. The event ended with a reception during which the members of the different partners had the opportunity to discuss the presentations.

Please contact the presenters if you spot an opportunity for collaboration!

The presentations are available for download:

MoVES Kickoff event

2 March 2007

Location: Vrije Universiteit Brussel

The event consisted of several presentations during which the Programming Technology Lab and the Systems and Software Engineering Lab of the Vrije Universiteit Brussel were presented. The presentations covered a number of selected research topics which are possible candidates for setting up a collaboration. Afterwards there was an open poster session during which the members of both research groups presented their work. To get an idea of the work of all members you can download the research descriptions booklet which was distributed during the event. The presentations are also available for download.

  1. Introduction to IAP - Veronique Feys (Belgian Science Policy Office)

Please contact the members of SSEL and PROG if you spot an opportunity for collaboration! Here are some photos taken at the event (1, 2, 3, 4, 5, 6).

Ambient CALA Seminar

13-14 February 2007 Location: Lille, France

The Ambient CALA Seminar united researchers from the Université des Sciences et Technologies de Lille - LIFL and the Vrije Universiteit Brussel who are working in the domain of ubiquitous computing and ambient intelligence.

The event has its own website:

Junior CALA Seminar

19-20 February 2007 Location: Vrije Universiteit Brussel

The Junior CALA Seminar is aimed at bringing together junior researchers of the Université des Sciences et Technologies de Lille - LIFL and the Vrije Universiteit Brussel. The research topics presented fit within the scope of the MoVES network.

The event has its own website:


International Workshop on Advanced Software Development Tools and Techniques.

Will be collocated with ECOOP 2008.


Third International ERCIM Symposium on Software Evolution.

Collocated with ICSM 2007.

VAMOS 2008

Second International Workshop on Variability Modelling of Software-intensive Systems.

ASE 2010

International Conference on Automated Software Engineering.

ASE 2010 will be organized in Belgium.

Talks about Software Evolution organized by the UCL (13/3/2008)

Tudor Girba University of Berne (Switzerland)

12h00 - 13h00

Hismo: modeling history to understand software evolution.

Abstract: Over the past three decades, more and more research effort has been spent on understanding software evolution. However, the approaches developed so far rely on ad-hoc models or on overly specific meta-models, and thus it is difficult to reuse or compare their results. We argue for the need for an explicit and generic meta-model that recognizes evolution as an explicit phenomenon and models it as a first-class entity. Our solution is to encapsulate evolution in the explicit notion of history as a sequence of versions, and to build a meta-model, called Hismo, around these notions. To show the usefulness of our meta-model we exercise its different characteristics by building several reverse engineering applications.

Bio: Tudor Girba attained his PhD degree in 2005 at the University of Berne, Switzerland, and has since been working as a senior researcher at the same university. His interests lie in the area of software engineering, with a focus on reengineering. He is one of the main architects and developers of the Moose analysis platform, he developed the Hismo software evolution meta-model, he co-authored the Mondrian interactive visualization scripting engine, and he participated in the development of several other reverse engineering tools and models. He is the president of the Moose Association and a member of the Executive Board of CHOOSE. He also offers consulting services in the area of reengineering and quality assurance.
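The core idea of the abstract — treating a history as a first-class entity, namely an ordered sequence of versions on which evolution questions can be asked directly — can be illustrated with a minimal sketch. This is an illustrative toy, not Hismo's actual meta-model or API; the class and property names (`Version`, `History`, `growth`, `is_stable`) are invented for this example.

```python
# Toy sketch of "history as a first-class entity": evolution is modeled
# as an ordered sequence of versions, and analyses are methods on the
# history itself rather than on individual snapshots.
# Names are illustrative, not Hismo's actual API.

from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class Version:
    """A snapshot of one entity (e.g. a class) at one point in time."""
    number: int
    lines_of_code: int


@dataclass
class History:
    """Evolution made explicit: an ordered sequence of versions."""
    versions: List[Version]

    def growth(self) -> int:
        """Size change between the first and last version."""
        return self.versions[-1].lines_of_code - self.versions[0].lines_of_code

    def is_stable(self) -> bool:
        """True when the entity never changed size across its history."""
        first = self.versions[0].lines_of_code
        return all(v.lines_of_code == first for v in self.versions)


h = History([Version(1, 100), Version(2, 140), Version(3, 130)])
print(h.growth())     # 30
print(h.is_stable())  # False
```

Because the history is itself an entity, evolution-level properties such as growth or stability can be computed, compared, and reused across analyses, instead of being recomputed ad hoc from pairs of snapshots.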

Tudor Girba University of Berne (Switzerland)

14h00 - 16h00

Understanding Software with Pictures

Abstract: Understanding software systems is hampered by their sheer size and complexity. Software visualization encodes the data found in these systems into pictures and enables the human eye to interpret it. In this lecture we present the concepts of software visualization and show several examples of how visualizations can help in understanding software systems. We will also complement the theory with practical demos using the Moose analysis platform.

Bio: Tudor Girba attained his PhD degree in 2005 at the University of Berne, Switzerland, and has since been working as a senior researcher at the same university. His interests lie in the area of software engineering, with a focus on reengineering. He is one of the main architects and developers of the Moose analysis platform, he developed the Hismo software evolution meta-model, he co-authored the Mondrian interactive visualization scripting engine, and he participated in the development of several other reverse engineering tools and models. He is the president of the Moose Association and a member of the Executive Board of CHOOSE. He also offers consulting services in the area of reengineering and quality assurance.

info/events.txt · Last modified: 2010/10/22 18:09 by serge.demeyer