
Petri Net Talks

Using Colored Petri Nets to Construct Coalescent Hidden Markov Models: Automatic Translation from Demographic Specifications to Efficient Inference Methods

Thomas Mailund, Anders Halager and Michael Westergaard

Biotechnological improvements over the last decade have made it economically and technologically feasible to collect large amounts of DNA sequence data from many closely related species. This enables us to study the detailed evolutionary history of recent speciation and demographics. Sophisticated statistical methods are needed, however, to extract the information that DNA sequences hold, and a limiting factor in this is dealing with the large state space that the ancestry of large DNA sequences spans. Recently, a new analysis method, CoalHMMs, has been developed that makes it computationally feasible to scan full genome sequences (the complete genetic information of a species) and extract genetic histories from them. Applying this methodology, however, requires that the full state space of ancestral histories can be constructed. This is not feasible to do manually, but by applying formal methods such as Petri nets it is possible to build sophisticated evolutionary histories and automatically derive the analysis models needed. In this paper we describe how to use colored stochastic Petri nets to build CoalHMMs for complex demographic scenarios.

-----------------------------------------------------------

Stochastic Modeling and Analysis using QPME: Queueing Petri Net Modeling Environment v2.0

Simon Spinner, Samuel Kounev and Philipp Meier

Queueing Petri nets are a powerful formalism that can be exploited for modeling distributed systems and analyzing their performance and scalability. By combining the modeling power and expressiveness of queueing networks and stochastic Petri nets, queueing Petri nets provide a number of advantages. In this paper, we present our tool QPME (Queueing Petri net Modeling Environment) for modeling and analysis using queueing Petri nets. QPME provides an Eclipse-based editor for building queueing Petri net models and a powerful simulation engine for analyzing these models. The development of the tool started in 2003 and since then the tool has been distributed to more than 120 organizations worldwide.

-----------------------------------------------------------

Snoopy - a unifying Petri net tool

Monika Heiner, Mostafa Herajy, Fei Liu, Christian Rohr and Martin Schwarick

The tool Snoopy provides a unifying Petri net framework with many application scenarios, particularly in systems and synthetic biology. The framework consists of two levels: uncoloured and coloured. Each level comprises a family of related Petri net classes that share structure but are specialized by their kinetic information.
Petri nets of all net classes within one level can be converted into each other, while changing the level involves unfolding or user-guided folding. Models can be hierarchically structured, allowing for the mastering of larger networks. Snoopy supports the simultaneous use of various Petri net classes; the graphical user interface adapts dynamically to the active one. Built-in animation and simulation (depending on the net class) are complemented by export to various analysis tools.
Snoopy facilitates the extension by new Petri net classes thanks to its generic design.

-----------------------------------------------------------

SMT-based Discovery Algorithm for C-nets

Marc Sole and Josep Carmona

Recently, Causal nets have been proposed as a suitable model for process discovery, due to their declarative semantics and great expressiveness. In this paper we propose the first algorithm to discover a causal net from a set of traces. It is based on encoding the problem as a Satisfiability Modulo Theories (SMT) formula, and uses a binary search strategy to optimize the derived model. The method has been implemented in a prototype tool that interacts with an SMT solver. The experimental results demonstrate the capability of the approach to discover complex behavior in limited time.
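
A rough illustration of the binary search strategy mentioned above, in Python: the loop looks for the smallest cost bound for which the encoding is satisfiable. The function encode_and_check is a hypothetical stand-in for the paper's SMT encoding and solver call; only the search skeleton is shown.

def encode_and_check(traces, k):
    """Hypothetical oracle: return a C-net model of cost at most k that fits the
    traces, or None if no such model exists (one SMT satisfiability check)."""
    raise NotImplementedError

def discover_optimal(traces, k_max):
    """Binary search for the smallest cost bound with a satisfiable encoding."""
    lo, hi, best = 0, k_max, None
    while lo <= hi:
        mid = (lo + hi) // 2
        model = encode_and_check(traces, mid)
        if model is not None:    # satisfiable: remember the model, try a smaller bound
            best, hi = model, mid - 1
        else:                    # unsatisfiable: a larger bound is needed
            lo = mid + 1
    return best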

-----------------------------------------------------------

Decomposing Process Mining Problems Using Passages

Wil Van Der Aalst

Process discovery, i.e., discovering a process model from example behavior recorded in an event log, is one of the most challenging tasks in process mining. Discovery approaches need to deal with competing quality criteria such as fitness, simplicity, precision, and generalization. Moreover, event logs may contain infrequent behavior and tend to be far from complete (i.e., typically only a fraction of the possible behavior is recorded). At the same time, models need to have formal semantics in order to reason about their quality. These complications explain why dozens of process discovery approaches have been proposed in recent years. Most of these approaches are time-consuming and/or produce poor quality models. In fact, simply checking the quality of a model is already computationally challenging.

This paper shows that process mining problems can be decomposed into a set of smaller problems after determining the so-called causal structure. Given a causal structure, we partition the activities over a collection of passages. Conformance checking and discovery can be done per passage. The decomposition of the process mining problems has two advantages. First of all, the problem can be distributed over a network of computers. Second, due to the exponential nature of most process mining algorithms, decomposition can significantly reduce computation time (even on a single computer). As a result, conformance checking and process discovery can be done much more efficiently.

-----------------------------------------------------------

Complexity of the Soundness Problem of Bounded Workflow Nets

Guanjun Liu, Jun Sun, Yang Liu and Jinsong Dong

Classical workflow nets (WF-nets for short) are an important class of Petri nets that are widely used to model and analyze workflow systems. Soundness is a crucial property that guarantees these systems are deadlock-free and bounded. Aalst et al. proved that the soundness problem is decidable and conjectured (but did not prove) that the soundness problem is EXPSPACE-hard. In this paper, we show that the satisfiability problem of Boolean expressions is polynomial-time reducible to the liveness problem of bounded WF-nets, and that soundness and liveness are equivalent for bounded WF-nets. As a result, the soundness problem of bounded WF-nets is co-NP-hard. Workflow nets with reset arcs (reWF-nets for short) are an extension of WF-nets that enhances their expressiveness. Aalst et al. proved that the soundness problem of reWF-nets is undecidable. In this paper, we show that for bounded reWF-nets the soundness problem is decidable and equivalent to the liveness problem. Furthermore, a bounded reWF-net can be constructed in polynomial time for each linear bounded automaton (LBA for short) with an input string, and we prove that the LBA accepts the input string if and only if the constructed reWF-net is live. As a result, the soundness problem of bounded reWF-nets is PSPACE-hard.

-----------------------------------------------------------

Cost Soundness for Priced Resource-Constrained Workflow nets

María Martos-Salgado and Fernando Rosa-Velardo

We extend workflow Petri nets (wf-nets) with discrete prices, by associating a price to the execution of a transition and, more importantly, to the storage of tokens. We first define the soundness problem for priced wf-nets, that of deciding whether the workflow can always terminate properly, where in the priced setting "properly" also means that the execution does not cost more than a given threshold. Then, we study soundness of resource-constrained workflow nets (rcwf-nets), an extension of wf-nets for the modelling of concurrent executions of a workflow sharing some global resources. We develop a framework in which to study soundness for priced rcwf-nets that is parametric on the cost model. That framework is then instantiated, obtaining the cases in which the sum, the maximum, the average and the discounted sum of the prices of all instances are considered. We study the relations between these properties, together with their decidability.

-----------------------------------------------------------

On the alpha-Reconstructibility of Workflow Nets

Eric Badouel

The process mining algorithm alpha was introduced by van der Aalst et al. for the discovery of workflow nets from event logs. This algorithm was presented in the context of structured workflow nets, even though it was recognized that more workflow nets should be reconstructible. In this paper we assess the alpha algorithm and provide a more precise description of the class of workflow nets reconstructible by alpha.

-----------------------------------------------------------

On Profiles and Footprints - Relational Semantics for Petri Nets

Matthias Weidlich and Jan Martijn van der Werf

Petri net systems have been successfully applied for modelling business processes and analysing their behavioural properties. In this domain, analysis techniques that are grounded on behavioural relations defined between pairs of transitions emerged recently. However, different use cases motivated different definitions of behavioural relation sets. This paper focusses on two prominent examples, namely behavioural profiles and behavioural footprints. We show that both represent different ends of a spectrum of relation sets for Petri net systems, each inducing a different equivalence class. As such, we provide a generalisation of the known relation sets. We illustrate that different relation sets complement each other for general systems, but form an abstraction hierarchy for distinguished net classes. For these net classes, namely S-WF-systems and sound free-choice WF-systems, we also prove a close relation between the structure and the relational semantics. Finally, we discuss implications of our results for the field of business process modelling and analysis.

-----------------------------------------------------------

Data and Abstraction for Scenario-Based Modeling with Petri Nets

Dirk Fahland and Robert Prüfer

Scenario-based modeling is an approach to describe behaviors of a distributed system in terms of partial runs, called scenarios. Deriving an operational system from a set of scenarios is the main challenge, typically addressed either by synthesizing system components or by providing operational semantics. Over the last years, several existing scenario-based techniques have been adapted to Petri nets. Their adaptation allows for verifying scenario-based models and for synthesizing individual components from scenarios within one formal technique, by building on Petri net theory.

However, these models face two limitations: a system modeler (1) cannot abstract from concrete behavior, and (2) cannot explicitly describe data in scenarios. This paper lifts these limitations for scenarios in the style of Live Sequence Charts (LSCs). We extend an existing scenario model that features Petri net-based semantics, verification, and synthesis techniques, further closing the gap between LSCs and Petri nets.

-----------------------------------------------------------

Maximal Confluent Processes

Xu Wang

In the process semantics of Petri nets, a non-sequential process is a concurrent run of the system represented in a partial-order-like structure. For transition systems we can define a similar notion of concurrent run by utilising the idea of confluence. Basically, a confluent process is an acyclic confluent transition system that is a partial unfolding of the original system. Given a non-confluent transition system G, how to find maximal confluent processes of G is a theoretical problem with many practical applications.

In this paper we propose an unfolding procedure for extracting maximal confluent processes from transition systems. The key technique we utilise in the procedure is the construction of granular configuration structures (i.e. a form of event structures) based on diamond-structure information inside transition systems.

-----------------------------------------------------------

Old and New Algorithms for Minimal Coverability Sets

Antti Valmari and Henri Hansen

Many algorithms for computing minimal coverability sets for Petri nets prune futures. That is, if a new marking strictly covers an old one, then not just the old marking but also some subset of its subsequent markings is discarded from search.
In this publication, a simpler algorithm that lacks future pruning is presented and proven correct. Its performance is then compared with that of future pruning.
It is demonstrated, using examples, that neither approach is systematically better than the other. However, the simple algorithm has some attractive features. It never needs to reconstruct pruned parts of the coverability set. If the coverability set is constructed in depth-first or most-tokens-first order, then an effect that gives most of the advantage of future pruning occurs automatically. Some generally practical aspects of coverability set construction are also discussed, and an idea called history merging is presented.
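
To make the basic setting concrete, here is a small Python sketch of coverability-set construction in the simple style discussed above: a marking taken from the work list is skipped if it is already covered, and nothing that was generated earlier is ever pruned. Omega is represented as infinity and the acceleration is the standard Karp-Miller one; this is a toy illustration under these assumptions, not the algorithm of the paper.

from collections import deque

OMEGA = float('inf')                       # omega components are modelled as infinity

def covers(m1, m2):
    """True if marking m1 covers m2 componentwise."""
    return all(a >= b for a, b in zip(m1, m2))

def fire(m, pre, post):
    """Successor marking of m under transition (pre, post), or None if disabled."""
    if any(a < p for a, p in zip(m, pre)):
        return None
    return tuple(a - p + q for a, p, q in zip(m, pre, post))

def accelerate(m, ancestors):
    """Karp-Miller acceleration: set strictly growing components to omega."""
    acc = list(m)
    for old in ancestors:
        if covers(m, old) and m != old:
            for i, (a, b) in enumerate(zip(m, old)):
                if a > b:
                    acc[i] = OMEGA
    return tuple(acc)

def coverability_set(m0, transitions):
    """transitions: list of (pre, post) vectors; returns a set of maximal markings."""
    found = set()
    work = deque([(m0, ())])               # (marking, ancestors on its path)
    while work:
        m, anc = work.popleft()
        if any(covers(f, m) for f in found):
            continue                       # covered: skip, but do not prune old work
        found = {f for f in found if not covers(m, f)} | {m}
        for pre, post in transitions:
            succ = fire(m, pre, post)
            if succ is not None:
                work.append((accelerate(succ, anc + (m,)), anc + (m,)))
    return found

Replacing the FIFO work list by a priority queue ordered by total token count gives the most-tokens-first order mentioned above.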

-----------------------------------------------------------

Stubborn Sets for Simple Linear Time Properties

Andreas Lehmann, Niels Lohmann and Karsten Wolf

We call a linear time property simple if its counterexamples are accepted by a Büchi automaton that has only singleton strongly connected components. This class contains interesting properties, such as the LTL formulas G(phi implies F psi) or phi U psi, which have not yet received support beyond general LTL-preserving approaches.

We contribute a stubborn set approach to simple properties with the following ingredients. First, we decompose the verification problem into finitely many simpler problems that can be executed independently. Second, we propose a stubborn set method for the resulting problems that requires neither cycle detection, nor stuttering invariance, nor the existence of transitions that are invisible to all atomic propositions. This means that our approach is applicable in cases where traditional approaches fail. Third, we reduce stubborn set calculation to an integer linear programming problem. The third contribution can be applied to many existing stubborn set methods as well.

-----------------------------------------------------------

Hybrid LTL Model Checking with the Sweep-Line Method

Sami Evangelista and Lars Kristensen

The sweep-line method allows explicit-state model checkers to delete states on the fly, thereby lowering the (sometimes prohibitive) memory requirements of the verification procedure. It relies on a priority-queue-based search algorithm, which makes it hard to use for LTL model checking, a task best performed using depth-first search algorithms. This paper proposes and experimentally evaluates an algorithm for LTL model checking that is compatible with the sweep-line method.

-----------------------------------------------------------

Safety Slicing Petri Nets

Astrid Rakow

We define a safety slice as a subnet of a marked Petri net S that approximates S's temporal behaviour with respect to a set of interesting places Crit. This safety slice can be used to verify and falsify stutter-invariant linear-time safety properties when Crit is the set of places referred to by the safety property. By construction it is guaranteed that the safety slice's state space is at most as big as that of the original net. Results on a benchmark set demonstrate effective reductions on several net instances. Safety slicing as a net preprocessing step may therefore accelerate model checking of linear-time safety properties.

-----------------------------------------------------------

Capacity Planning for Vertical Search Engines: An approach based on Coloured Petri Nets

Veronica Gil Costa, Jair Lobos, Alonso Inostrosa-Psijas and Mauricio Marin

A model based on a Coloured Petri Net design is proposed for evaluating relevant performance metrics for vertical search engines. The model has a modular design that enables accommodation of alternative/additional search engine components. A performance evaluation study is presented to illustrate the use of the model and it shows that the proposed model is suitable for rapid exploration of different scenarios and determination of feasible search engine configurations.

-----------------------------------------------------------

An Infrastructure for Cost-Effective Testing of Operational Support Algorithms Based on Colored Petri Nets

Joyce Nakatumba, Michael Westergaard and Wil Van Der Aalst

Operational support is a specific type of process mining that assists users while process instances are being executed. Examples are predicting the remaining processing time of a running insurance claim and recommending the action that minimizes the treatment costs of a particular patient. Whereas it is easy to evaluate prediction techniques using cross validation, the evaluation of recommendation techniques is challenging as the recommender influences the execution of the process. It is impossible to simply use historic event data. Therefore, we present an approach where we use a colored Petri net model of user behavior to drive a real workflow system and real implementations of operational support, thereby providing a way of evaluating algorithms for operational support (including recommendations). We have implemented our approach using Access/CPN 2.0. In this paper, we evaluate some algorithms for operational support using different user models.

-----------------------------------------------------------

CPN Assistant II: A Tool for Management of Networked Simulations

Stefan Korecko, Jan Marcincin and Viliam Slodicak

The paper describes CPN Assistant II, a simulation management tool for the CPN Tools software. CPN Assistant II cooperates with CPN Tools 3.0 and makes it possible to prepare, run and manage multiple simulation jobs in a network environment. For each simulation job, a net, the number of simulation runs, the number of transition firings per run and a selection of monitors to be processed can be specified. The collected data can be post-processed by means of user-defined plug-ins.

-----------------------------------------------------------

Designing Weakly Terminating ROS Systems

Debjyoti Bera, Kees Van Hee and Jan Martijn Van Der Werf

The Robot Operating System (ROS) is a popular software framework to develop and execute software for robot systems. ROS supports component-based development and provides a communication layer for easy integration. It supports three interaction patterns that are essential for control systems: the publish-subscribe pattern, the remote procedure call pattern and the goal-feedback-result pattern. In this paper we apply Petri nets to develop a structured design method for ROS systems, such that the weak termination property is guaranteed. The method is based on stepwise refinement using three interaction patterns and components modeled as state machines. The method is illustrated with a case study of robot ROSE.

-----------------------------------------------------------

Nets-within-nets to model innovative space system architectures

Frederic Cristini and Catherine Tessier

This paper focuses on a nets-within-nets approach to model, simulate and evaluate innovative space system architectures. Indeed, as military Earth observation space systems are more and more subjected to a wide spectrum of emerging space threats and attacks, it is necessary to devise new architectures and to assess their performance and their threat-tolerance. We will consider networked constellations of autonomous microsatellites, or Autonomous Networked Constellations (ANCs), in which several heterogeneous interacting entities (satellites and ground stations), ruled by the laws of space dynamics, rely on resources (functions or sub-systems) to achieve the overall Earth observation mission. Petri nets are well adapted for modelling ANCs as they feature event-triggered state changes and concurrent communication accesses. Moreover, nets-within-nets allow hierarchical state propagation to be represented. The ANC model is based on Renew 2.2, which we use to deal with top-down, bottom-up and horizontal synchronizations so as to represent state propagations from an entity to its nested resources and vice versa, and from one entity to another. A model and simulation of a simple ANC subjected to threats are given and discussed.

-----------------------------------------------------------

P- and T-Systems in the Nets-within-Nets-Formalism

Frank Heitmann and Michael Köhler-Bußmeier

We are concerned with (strongly deterministic) generalised state machines (GSMs), a restricted formalism of the nets-within-nets-family, and further restrict the involved nets to P- and T-nets. While GSMs with these restrictions are likely to be of little use in modelling applications, understanding them better might help in future attempts to analyse more sophisticated formalisms.

We show that, given a strongly deterministic GSM where the system net and all object nets are T-nets, it is PSpace-complete to decide the reachability of a given marking. In past work we have already shown that the corresponding problem with the restriction to P-nets is solvable in polynomial time; we discuss that work in the context given here. Finally, we give some initial results concerning other combinations of restricting the system net and/or the object nets to P- and/or T-nets. Throughout, we also discuss the effect of dropping the restriction to strongly deterministic GSMs.

-----------------------------------------------------------

ACSD Talks

An Algorithmic Framework for Coverability in Well-Structured Systems

Tim Strazny and Roland Meyer

We generalize the backward algorithm for coverability in WSTS to an algorithmic framework. The idea is to consider the predecessor computation, the witness function, and the processing order as parameters. On the theoretical side, we prove that every instantiation of the functions (that satisfies some requirements) still yields a decision procedure for coverability. On the practical side, we show that known optimizations like partial order reduction and pruning can be formulated in our framework. We also give a novel acceleration-based instantiation. For well-known classes of WSTS (PN, PN+Transfer, LCS), we optimize the selection function, inspired by A*. We implemented our instantiations and comment on the data structures. Extensive experiments show that the new algorithms can compete with a state-of-the-art tool: MIST2.
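
For orientation, the following Python sketch shows the textbook backward coverability procedure for plain Petri nets, with the processing order exposed as a parameter in the spirit of the framework described above. The predecessor computation and the default priority (total token count) are standard choices used only to illustrate the parametrisation; they are not the instantiations or the selection function of the paper.

import heapq

def covers(m1, m2):
    return all(a >= b for a, b in zip(m1, m2))

def min_predecessor(m, pre, post):
    """Smallest marking from which firing (pre, post) reaches a marking >= m."""
    return tuple(max(a + p - q, p) for a, p, q in zip(m, pre, post))

def backward_coverability(m0, target, transitions, priority=sum):
    """Decide whether a marking covering `target` is reachable from m0.
    `priority` plays the role of the parametric processing order."""
    basis = {target}                       # finite basis of the upward-closed set
    frontier = [(priority(target), target)]
    while frontier:
        _, m = heapq.heappop(frontier)
        if covers(m0, m):
            return True                    # m0 lies in the upward closure
        for pre, post in transitions:
            p = min_predecessor(m, pre, post)
            if not any(covers(p, b) for b in basis):
                basis = {b for b in basis if not covers(b, p)} | {p}
                heapq.heappush(frontier, (priority(p), p))
    return False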

-----------------------------------------------------------

Hasse Diagrams of Combined Traces

Lukasz Mikulski and Maciej Koutny

One of the standard ways to represent concurrent behaviours is to use concepts originating from language theory, such as traces and comtraces. Traces can express notions such as concurrency and causality, whereas comtraces can also capture weak causality and simultaneity.
This paper is concerned with the development of efficient data structures and algorithms for manipulating comtraces. We introduce Hasse diagrams for comtraces which are a generalisation of Hasse diagrams defined for partial orders and traces, and develop an efficient algorithm for deriving them from language theoretic representations of comtraces. We also explain how the new representation of comtraces can be used to implement efficiently some basic operations on comtraces.
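
As background for the ordinary trace case, the Hasse diagram of a finite partial order is its transitive reduction, i.e. the set of covering pairs. The small Python sketch below computes it naively; the comtrace generalisation developed in the paper additionally has to account for weak causality and simultaneity, which this toy does not capture.

def hasse_edges(elements, less_than):
    """Covering pairs of a finite strict partial order given by less_than(a, b)."""
    return [
        (a, b)
        for a in elements
        for b in elements
        if less_than(a, b)
        and not any(less_than(a, c) and less_than(c, b) for c in elements)
    ]

# Divisibility order on {1, 2, 3, 6}: the Hasse edges are (1,2), (1,3), (2,6), (3,6).
print(hasse_edges([1, 2, 3, 6], lambda a, b: a != b and b % a == 0))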

-----------------------------------------------------------

Algebra of Parametrised Graphs

Andrey Mokhov, Victor Khomenko, Arseniy Alekseyev and Alex Yakovlev

One of the difficulties in designing modern hardware systems is the necessity to comprehend and to deal with a very large number of system configurations, operational modes, and behavioural scenarios. It is often infeasible to consider and specify each individual mode explicitly, and one needs methodologies and tools to exploit similarities between the individual modes and work with groups of modes rather than individual ones. The modes and groups of modes have to be managed in a compositional way: the specification of the system should be composed from specifications of its blocks. This includes both structural and behavioural composition. Furthermore, one should be able to transform and optimise the specifications in a fully formal and natural way.

In this paper we propose a new formalism, called Parametrised Graphs. It extends the existing Conditional Partial Order Graphs (CPOGs) formalism in several ways. First, it deals with general graphs rather than just partial orders. Moreover, it is fully compositional. To achieve this we introduce an algebra of Parametrised Graphs by specifying the equivalence relation by a set of axioms, which is proved to be sound, minimal and complete. This allows one to manipulate the specifications as algebraic expressions using the rules of this algebra. We demonstrate the usefulness of the developed formalism on two case studies coming from the area of microelectronics design.

-----------------------------------------------------------

Reactive and Proactive Diagnosis of Distributed Systems using Net Unfoldings

Javier Esparza and Christian Kern

We present a diagnosis algorithm for distributed systems modeled as products of transition systems, a model very close to Petri nets. Following the seminal work of Benveniste, Fabre, Haar, and Jard, our algorithm palliates the state-explosion problem by means of the unfolding technique. Given an observation (partial information about a firing sequence), the algorithm constructs a prefix of the unfolding, a compact representation of the executions of the system compatible with the observation. Since the computation of this prefix is algorithmically involved, we define an overapproximation that trades precision for speed. We report on an implementation that constructs the prefix when the sequence of observations is known (reactive), but we also present an online approach in which the diagnosis proactively looks into the future, and we compare these two approaches. Moreover, we present SAT solving methods for analyzing the explanation.

-----------------------------------------------------------

Translating Structural Process Properties to Petri Net Markings

Sven Linker

We introduce PSTL, a spatio-temporal logic defined on Pi-Calculus processes. This logic is especially suited to formulating properties with respect to the structural semantics of the Pi-Calculus due to Meyer, a representation of processes as Petri nets. To allow for the use of well-researched verification techniques, we present a translation of a subset of PSTL to LTL on Petri nets.
We further prove soundness of our translation.

-----------------------------------------------------------

On Dual-Rail Control Logic for Enhanced Circuit Robustness

Andrey Mokhov, Victor Khomenko, Danil Sokolov and Alex Yakovlev

Ultra low-power design and energy harvesting applications require digital systems to operate under extremely low voltages approaching the point of balance between dynamic and static power consumption which is attained in the sub-threshold operation mode. Delay variations are extremely large in this mode, which calls for the use of asynchronous circuits that are speed-independent or quasi-delay-insensitive. However, even these classes of asynchronous logic become vulnerable because certain timing assumptions commonly accepted under normal operating conditions are no longer valid. In particular, the delay of inverters, often used as the so-called input ‘bubbles’, can no longer be neglected and they have to be either removed or properly acknowledged to ensure speed-independence.

This paper presents an automated approach to synthesis of robust controllers for sub-threshold digital systems based on dual-rail implementation of control logic which eliminates inverters completely. This and other important properties are analysed and compared to the standard single-rail solutions. Dual-rail controllers are shown not to have significant overheads in terms of area and power consumption and are even faster in some cases due to the elimination of inverters from critical paths. The presented automated synthesis techniques are very efficient and can be applied to very large controllers as demonstrated in benchmarks.

-----------------------------------------------------------

Large-Scale On-Chip Dynamic Programming Network Inferences using Moderated Inter-Core Communication

Andrew Mundy, Terrence Mak, Alex Yakovlev, Simon Davidson and Steve Furber

The analysis of large scale, complex networks using dynamic programming is of great use in many scientific and engineering disciplines. Current applications often require the analysis of scale-free networks with many millions of nodes and edges, presenting a huge computational challenge. Employing a distributed networks-on-chip infrastructure presents a unique opportunity to deliver power-efficient and massively parallel acceleration. However, bursty and asymmetric communication across cores can create instant network saturation and lead to packet loss and performance degradation. In this paper, we present a moderated communication methodology that enables balanced channel usage and network topological adaptation for improved performance. A novel analytical communication model for NoC is developed and leads to a theoretical bound on the estimated on-chip communication cost. The performance of the many-core computation and the proposed methods is rigorously evaluated using the real 18-core SpiNNaker chip. We demonstrate a 10x speed-up in analysis convergence and a 42% reduction in instantaneous Packet Injection Rate on benchmark networks.

-----------------------------------------------------------

A Robust Asynchronous Interfacing Scheme with Four-Phase Dual-Rail Coding

Jakob Lechner, Martin Lampacher and Thomas Polzer

This paper proposes new robust asynchronous interfaces for GALS systems. A combination of delay-insensitive and error detecting/correcting codes is used to achieve two types of robustness: variation tolerance and fault tolerance. Concerning the delay-insensitive code, this paper targets the well-known 4-phase dual-rail code, frequently used in asynchronous circuit design. In order to enable an optimal choice of the error detecting/correcting code, a precise fault model and a general classification of possible interconnect architectures are presented. The goal is to tolerate single-bit errors with maximum coding efficiency, i.e., with minimal overheads for interconnect resources. This is accomplished by fully utilizing the information redundancy provided by the combination of the delay-insensitive code and an appropriate error detecting/correcting code.
Metastable upsets, however, cannot be handled with error correcting codes alone. Faults can occur at arbitrary times and thus compromise system timing. Even though metastability cannot be avoided, a metastability-tolerant implementation is presented, which waits for a metastable upset to resolve before processing a new data word. This guarantees correct data transmission regardless of the timing of erroneous inputs.

-----------------------------------------------------------

All Linear-Time Congruences for Finite LTSs and Familiar Operators

Antti Valmari

In process-algebraic verification, one can use many different semantic congruences. The weakest congruence that preserves the properties of interest is optimal, because it leaves most room for reducing the labelled transition system (LTS) that represents the behaviour of the system to be verified. Weakest congruences for some properties have been known. However, there has been no systematic treatment. This publication finds, for finite LTSs, *all* stuttering-insensitive linear-time congruences with respect to action prefix, hiding, relational renaming, and parallel composition. There are 20 of them. They are built from the alphabet, traces, two kinds of divergence traces, and five kinds of failures. Because of lack of space, the publication concentrates on the hardest and most novel part of the result, that is, proving the absence of more congruences.

-----------------------------------------------------------

Sequence Chart Studio

Martin Bezdeka, Ondrej Bouda, Lubos Korenciak, Matus Madzin and Vojtech Rehak

The Sequence Chart Studio (SCStudio) is a user-friendly drawing and verification tool for Message Sequence Charts (MSC). SCStudio supports several checkers that are able to verify properties such as realizability, time consistency, equivalence checking between two MSC diagrams, etc. Some of them are well known, while others are new or non-trivial extensions of existing ones. The graphical front-end is implemented as a Microsoft Visio add-on, whereas the checkers are platform independent. SCStudio is an open source project that provides an open interface for additional modules.

-----------------------------------------------------------

PeabraiN: A PIPE Extension for Performance Estimation and Resource Optimisation

Ricardo J. Rodriguez, Jorge Julvez and Jose Merseguer

Many discrete systems with shared resources from different artificial domains (such as manufacturing, logistics or web services) can be modelled in terms of timed Petri nets. Two studies of interest when dealing with such systems are performance evaluation (i.e., completed jobs per unit of time) and resource optimisation. Exact performance evaluation, however, may become unachievable due to the need for an exhaustive exploration of the state space. In this context, a solution is to estimate the performance by computing bounds. Resource optimisation takes a given budget and distributes resources in order to maximise the system performance. In this paper, we present PeabraiN, a collection of PIPE tool-compliant modules for performance estimation and resource optimisation based on bound computation for Stochastic Petri Nets. The algorithms supporting the modules make intensive use of linear programming techniques and therefore have low computational complexity. In addition, other Petri net properties, such as the structural enabling bound at a transition, the structural marking bound at a place and the computation of visit ratios, are added to the PIPE tool as well.
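
As an illustration of the kind of linear programming involved, the following Python sketch computes the classical structural marking bound of a place via the LP relaxation of the state equation (maximise M(p) subject to M = M0 + C·sigma with M, sigma >= 0). This is a standard structural bound given here only as an assumption of how such a computation may look; it is not PeabraiN's actual implementation.

import numpy as np
from scipy.optimize import linprog

def structural_marking_bound(C, m0, place):
    """Upper bound on the marking of `place` from the LP relaxation of the
    state equation: max M[place] s.t. M = m0 + C*sigma, M >= 0, sigma >= 0."""
    n_p, n_t = C.shape
    A_eq = np.hstack([np.eye(n_p), -C])    # variables x = [M, sigma]: M - C*sigma = m0
    b_eq = np.asarray(m0, dtype=float)
    c = np.zeros(n_p + n_t)
    c[place] = -1.0                        # maximise M[place] by minimising -M[place]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    if res.status == 3:
        return float("inf")                # unbounded LP: structurally unbounded place
    return -res.fun

# Hypothetical toy net: t0 moves the single token from p0 to p1, t1 moves it back.
C = np.array([[-1.0, 1.0],
              [ 1.0, -1.0]])
print(structural_marking_bound(C, [1, 0], place=1))   # expected bound: 1.0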

-----------------------------------------------------------

Improving Dynamic Partial Order Reductions for Concolic Testing

Olli Saarikivi, Kari Kähkönen and Keijo Heljanko

Testing multi-threaded programs is hard due to the state explosion problem arising from the different interleavings of concurrent operations. The dynamic partial order reduction (DPOR) algorithm by Flanagan and Godefroid is one solution to reducing this problem. We present a modification to this algorithm that allows it to exploit the commutativity of read operations and provide further reduction. To enable testing of multi-threaded programs that also take input, we show that it is possible to combine DPOR with concolic testing. We have implemented our modified DPOR algorithm in the LCT concolic testing tool. We have also implemented the sleep set algorithm, which can be used along with DPOR to provide further reduction. As the LCT tool was designed for distributed use, we have modified the sleep set algorithm for use in a distributed client-server testing setting.

-----------------------------------------------------------

Introducing Fairness into Compositional Verification via Unidirectional Counters

Antti Siirtola, Antti Puhakka and Gerald Lüttgen

We equip labelled transition systems (LTSs) with unidirectional counters (UCs) which can be initialised to an arbitrary positive value and decremented but not incremented. This formalism, called UCLTSs, enables one to express fairness properties of concurrent systems in a finite form and reason about them in a compositional, refinement-based fashion. Technically, we first show how to apply CSP-style parallel composition and hiding directly on UCLTSs while maintaining compositionality. As our main result, we prove that the refinement checking of UCLTSs under the chaos-free failures-divergences semantics can be reduced to two decidable tasks that can be solved by the popular FDR2 and NuSMV model checkers, respectively.

-----------------------------------------------------------

Affine Data-Flow Graphs for the Synthesis of Hard Real-Time Applications

Adnan Bouakaz, Jean-Pierre Talpin and Jan Vitek

Data-flow models ease the task of constructing feasible schedules of computations and communications of high-assurance embedded applications. One key and open issue is how to schedule data-flow graphs so as to minimize the buffering of data and reduce end-to-end latency. Most of the proposed techniques in that respect are based on either static or data-driven scheduling. This paper looks at the problem in a different way by considering priority-driven preemptive scheduling theory of periodic tasks to execute a data-flow program.

Our approach to the problem can be detailed as follows:

  • We propose a model of computation in which the activation clocks of actors are related by affine functions. The affine relations describe the symbolic scheduling constraints of the data-flow graph.
  • Based on this framework, we present an algorithm that computes affine schedules in a way that minimizes buffering requirements and, in addition, guarantees the absence of overflow and underflow exceptions over communication channels.
  • Depending on the chosen scheduling policy (earliest-deadline-first or rate-monotonic), we concretize the symbolic schedule by defining the period and the phase of each actor. This concretization guarantees schedulability and maximizes the processor utilization factor (a minimal utilization-based illustration is sketched after this list).
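
As a minimal illustration of the schedulability side of the last point, the following Python sketch applies the standard single-processor utilization tests: the exact utilization condition for earliest-deadline-first and the Liu and Layland bound for rate-monotonic scheduling. This is generic background with a hypothetical task set, not the concretization algorithm of the paper.

def utilization(tasks):
    """tasks: list of (wcet, period) pairs of periodic tasks on one processor."""
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    return utilization(tasks) <= 1.0                      # exact test under EDF

def rm_schedulable(tasks):
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)   # sufficient test under RM

tasks = [(1, 4), (2, 6), (1, 8)]          # hypothetical (wcet, period) pairs
print(utilization(tasks), edf_schedulable(tasks), rm_schedulable(tasks))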

-----------------------------------------------------------

A Trace-Based Semantics for Responsiveness

Walter Vogler, Christian Stahl and Richard Müller

In the context of asynchronously communicating services, responsiveness guarantees that a service and its environment always have the possibility to communicate. The responsiveness preorder describes when one service can be replaced by another such that responsiveness is preserved. We study responsiveness for possibly unbounded services with and without final states, and present for both preorder variants a semantical characterization based on traces. Surprisingly, the preorders turn out not to be precongruences, and for both we characterize the coarsest precongruence that is contained in the respective preorder.

-----------------------------------------------------------

Reachability Analysis of P-Time Petri Nets with Parametric Markings

Hanifa Boucheneb and Kamel Barkaoui

This paper deals with the verification of P-Time Petri nets with parametric markings. It investigates the verification of reachability properties in the case of an upward-closed set of initial markings. We propose an efficient state space abstraction which preserves markings and firing sequences. In such an abstraction, sets of states reached by the same firing sequence are over-approximated and represented concisely by their minimal marking and the union of their time domains (abstract states). We show that even if abstract states may contain some extra states, we can retrieve the exact reachable states and then verify reachability properties.

-----------------------------------------------------------

A High-level Strategy for C-net Discovery

Marc Sole and Josep Carmona

Causal nets have recently been proposed as a suitable model for process mining, due to their declarative semantics and compact representation. However, the discovery of causal nets from a log is a complex problem. The current algorithmic support for the discovery of causal nets comprises either fast but inaccurate methods (compromising quality), or accurate algorithms that are computationally demanding, thus limiting the size of the inputs they can process. In this paper a high-level strategy is presented, which uses appropriate clustering techniques to split the log into pieces and benefits from the additive nature of causal nets. This makes it possible to structurally amalgamate the causal nets discovered for the individual pieces into a valuable model. The claims in this paper are accompanied by experimental results showing the significance of the presented high-level strategy.

-----------------------------------------------------------

Verifying Performance of Supervised Plants

Jasen Markovski and Michel Reniers

Supervisory control theory deals with synthesis of models of supervisory controllers (so-called supervisors) based on discrete-event models of the uncontrolled system (referred to as plants) and the control requirements. Typically, the theory ensures safe and nonblocking behavior of the supervised plant, i.e., the coupling of the plant and the supervisor in a feedback supervisory control loop. Extensions, like optimal supervision, additionally ensure that given performance requirements are met as well. Unfortunately, ensuring optimality during synthesis of the supervisor proved to be computationally challenging. In this paper we propose a framework that treats the Markovian delays syntactically and employs existing synthesis tools for non-stochastic plants. Afterwards, by means of a direct transformation, relying on the memoryless property of Markovian delays, we derive the underlying Markov process, which is suitable for stochastic model checking. This enables us to circumvent the problem of optimality during the synthesis procedure. To model the control loop and prove the correctness of the transformation, we employ a recently developed process theory for supervisory control of stochastic systems.

-----------------------------------------------------------

State Space Analysis using Symmetries on Decision Diagrams

Souheib Baarir, Maximilien Colange, Fabrice Kordon and Yann Thierry-Mieg

There are two well-accepted techniques to tackle combinatorial explosion in model checking: exploitation of symmetries and the use of reduced decision diagrams. Previous work has shown that these two techniques can be stacked in specific cases.

This paper presents a novel and more general approach to combine these two techniques. Expected benefits of this combination are:

  • in symmetry-based reduction, the main source of complexity resides in the canonization computation that must be performed for each newly encountered state; the use of shared decision diagrams allows one to canonize sets of states at once.
  • in decision-diagram-based techniques, dependencies between variables induce an explosion in representation size; the manipulation of canonical states allows us to partly overcome this limitation.

We show that this combination is experimentally effective in many typical cases.

-----------------------------------------------------------

Modularity and Part-Whole Compositionality for Computing the State Semantics of Statecharts

Luca Pazzi and Marco Pradelli

The paper discusses modularity and compositionality issues in state-based modeling formalisms and presents related recent research results. Part-Whole Statecharts add modular constructs to traditional Statecharts in order to allow incremental and fully reusable composition of behavioral abstractions, explicitly enforcing the coordinated systemic behavior and bringing benefits to subsequent modeling and implementation phases. The paper shows that Part-Whole Statecharts have a computable semantics, which can be specified through a constraint-driven specification method. Such a method makes it possible to specify and verify the intended meaning of states directly at design time, thus avoiding less effective verification techniques, such as exhaustive testing or model checking.

back

