*Abstract:*
Large organizations often need to maintain several variants of a
process model. Such variants may stem from distinct products,
different types of customers, dissimilar legal requirements across
countries, or idiosyncratic choices made by multiple business units
over time. In some scenarios, analysts need to consolidate multiple
process variants into a single one. To do so, they must accurately
understand the differences between the variants in order to determine
how to reconcile them.

This talk will present current work addressing the problem of explaining the differences between a pair of acyclic process models in terms of simple and intuitive statements. The presented process model differencing takes the behavior of the processes as the aspect for comparison. The approach is based on two types of event structures (a well-known formalism for expressing behavioral relations): prime event structures and asymmetric event structures. It turns out that the high level of node duplication inherent to prime event structures hinders the usefulness of the difference diagnostics that can be extracted from them. Accordingly, the current approach defines a method for producing (asymmetric) event structures with a reduced amount of duplication.
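
To make the formalism concrete, here is a small Python sketch of a prime event structure and its configurations; the events, causality and conflict relation are invented for illustration, and this is not the talk's differencing method itself.

```python
from itertools import chain, combinations

# A toy prime event structure: events, a causality relation and a
# conflict relation (all invented for illustration).
events = {"a", "b", "c"}
causes = {"a": set(), "b": {"a"}, "c": set()}    # event b causally requires a
conflicts = [{"b", "c"}]                         # b and c can never co-occur

def is_configuration(xs):
    """A configuration is a causally closed, conflict-free set of events."""
    xs = set(xs)
    closed = all(causes[e] <= xs for e in xs)
    conflict_free = all(not (pair <= xs) for pair in conflicts)
    return closed and conflict_free

def configurations():
    """All configurations, i.e. the reachable 'states' of the system."""
    candidates = chain.from_iterable(
        combinations(sorted(events), r) for r in range(len(events) + 1))
    return [tuple(c) for c in candidates if is_configuration(c)]
```

Behavioral comparison of two models then amounts to comparing such causality and conflict relations, which is where duplication in the prime event structure becomes costly.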

Slides of the talk: [html]

*Abstract:*
Time Based Art (TBA) refers to art which has a 'duration' as well as a
'dimension'. The audience watches as the installation develops before
their eyes. Time-based media works demand that we shift our gaze
beyond the material to embrace the less tangible aspects of an
installation. This performance will be an exercise in live coding, a
newcomer to the time-based art scene, which celebrates artists from
across and in between all mediums, and activates the entire community
with art and ideas. In this performance I will code up an interpreter
for Gödel's System T and use it to add two and two.
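
As a taste of what the exercise involves, the following Python sketch is a hypothetical miniature of it: an evaluator for System T terms (zero, successor, lambda abstraction, application and the recursor), used to add two and two. The term representation is my own choice, not the performance's actual code.

```python
# Terms are tagged tuples; Rec's last argument is the number recursed on.
def evaluate(term, env=None):
    env = env or {}
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "zero":
        return 0
    if tag == "suc":
        return evaluate(term[1], env) + 1
    if tag == "lam":                        # ("lam", name, body)
        _, name, body = term
        return lambda v: evaluate(body, {**env, name: v})
    if tag == "app":                        # ("app", function, argument)
        return evaluate(term[1], env)(evaluate(term[2], env))
    if tag == "rec":                        # ("rec", base, step, n)
        _, base, step, n = term
        acc = evaluate(base, env)
        f = evaluate(step, env)
        for i in range(evaluate(n, env)):
            acc = f(i)(acc)                 # step gets predecessor and accumulator
        return acc
    raise ValueError(tag)

def num(k):
    """The numeral for k: Suc applied k times to Zero."""
    t = ("zero",)
    for _ in range(k):
        t = ("suc", t)
    return t

# add = \m n. Rec n (\k acc. Suc acc) m  -- addition by primitive recursion
add = ("lam", "m", ("lam", "n",
       ("rec", ("var", "n"),
               ("lam", "k", ("lam", "acc", ("suc", ("var", "acc")))),
               ("var", "m"))))

four = evaluate(("app", ("app", add, num(2)), num(2)))
```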

Slides of the talk: [pdf]

*Abstract:*
Turing categories capture abstractly the notion of computability. A
natural question to ask, therefore, is: "what categories can be the
total maps of a Turing category?" For example, is it possible to have
a Turing category in which the total maps are precisely the polynomial
time maps?

One can abstractly and completely characterize the categories which can be the total maps of Turing categories, and this tells one that there are indeed Turing categories whose total maps all have low complexity. However, this abstract result is rather unsatisfactory, as the proof is completely unrelated to complexity! The purpose of the talk is to show how one can build Turing categories with total maps belonging to various complexity classes using the basic tools of complexity theory itself!

*Abstract:*
The growing digital repository of scientific literature (MEDLINE)
demands novel text mining approaches to search text, extract relevant
terms, and understand the relationship between terms in an acceptable
time and space. One such problem is mining MEDLINE to find links
between genes and environmental risk factors for a disease. In this
talk, I describe a novel text mining approach of using regular
expression learning to recognize gene names, search for co-occurring
environmental risk factors, and finally construct a network between
genes and environmental risk factors for a disease. Preliminary
results indicate that such an approach can distill links between
interesting terms from a large text corpus and visualize them
graphically.
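
To illustrate the flavour of the approach (not its actual implementation), the sketch below uses a hand-written regular expression in place of a learned gene-name recognizer and counts sentence-level co-occurrences with risk-factor terms; the pattern, sentences and vocabulary are all invented.

```python
import re
from collections import Counter

# Invented sample sentences and risk-factor vocabulary.
abstracts = [
    "BRCA1 mutations together with smoking increase disease risk.",
    "Expression of TP53 was correlated with alcohol consumption.",
    "Smoking was not associated with TP53 in this cohort.",
]
# A crude, hand-written stand-in for a *learned* gene-name recognizer.
gene_pattern = re.compile(r"\b[A-Z][A-Z0-9]{2,}\b")
risk_factors = {"smoking", "alcohol"}

# Count sentence-level co-occurrences; these become edges of the network.
edges = Counter()
for sentence in abstracts:
    genes = set(gene_pattern.findall(sentence))
    factors = {f for f in risk_factors if f in sentence.lower()}
    for gene in genes:
        for factor in factors:
            edges[(gene, factor)] += 1
```

The resulting edge weights can then be fed to any graph-drawing tool for the visualization step.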

*Abstract:*
Several decision procedures for context-free grammars work on grammars
in some normal form. The aim of this work was to implement verified
conversions between general context-free grammars and grammars in
normal form.
This was done by implementing and verifying correctness of four
transformations in the Agda dependently typed programming language.
These transformations can be applied independently, giving different
versions of the original grammar. When all four of them are applied in
the right order, the result is in Chomsky normal form.

The transformations were proved to be sound and complete. Soundness and completeness state that parse trees for the normalized grammar and parse trees for the original grammar are interconvertible. Since the proofs are constructive, they are in fact certified parse tree conversion programs. (Joint work with Tarmo Uustalu.)
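
For a feel of what one such transformation does, here is a Python sketch (the talk's actual development is a verified Agda program) of the classical binarization step, which replaces productions with long right-hand sides by chains of binary productions over fresh nonterminals:

```python
def binarize(grammar):
    """Replace each production whose right-hand side has more than two
    symbols by a chain of binary productions using fresh nonterminals.
    `grammar` is a list of (head, body) pairs with `body` a tuple."""
    fresh = 0
    out = []
    for head, body in grammar:
        while len(body) > 2:
            fresh += 1
            new = f"_X{fresh}"
            out.append((head, (body[0], new)))
            head, body = new, body[1:]
        out.append((head, body))
    return out

# S -> a S b S becomes S -> a _X1, _X1 -> S _X2, _X2 -> b S
rules = binarize([("S", ("a", "S", "b", "S"))])
```

The verified Agda version additionally carries the proof that parse trees before and after the transformation are interconvertible.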

*Abstract:*
Process mining is a family of techniques to discover business process
models and other knowledge of business processes from event logs.
Existing process mining techniques are geared towards discovering
models that capture the order of execution of tasks, but not the
conditions under which tasks are executed -- also called branching
conditions. One existing process mining technique, namely ProM's
Decision Miner, applies decision tree learning to discover branching
conditions composed of atoms of the form "v op c" where "v" is a
variable, "op" is a comparison predicate and "c" is a constant. In
this talk, I will present a more general technique to discover
branching conditions where the atoms are linear equations or
inequalities involving multiple variables and arithmetic operators.
The proposed technique combines invariant discovery techniques
embodied in the Daikon system with decision tree learning techniques.
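
As a toy illustration of the idea (not ProM's or Daikon's actual code), the sketch below postulates a hypothetical linear template x + y <= c at a decision point and instantiates the constant from logged observations, in the spirit of invariant discovery:

```python
# Invented log of a decision point: variable values and the branch taken.
observations = [
    (10, 5, "approve"), (7, 2, "approve"), (12, 3, "approve"),
    (20, 9, "reject"), (16, 14, "reject"),
]

def discover_threshold(obs, branch):
    """Tightest constant c such that x + y <= c holds whenever `branch`
    was taken (a Daikon-style instantiation of a linear template)."""
    return max(x + y for x, y, b in obs if b == branch)

c = discover_threshold(observations, "approve")
# Check that the candidate atom "x + y <= c" separates the two branches.
consistent = all((x + y <= c) == (b == "approve")
                 for x, y, b in observations)
```

The real technique searches over many such multi-variable templates and uses decision tree learning to combine the discovered atoms into branching conditions.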

Slides of the talk: [pdf]

*Abstract:*
Functional reactive programming (FRP) allows programs to deal with
temporal aspects in a declarative way. FRP is based on behaviors and
events, which denote time-varying values and values attached to times,
respectively. Recently, we added the notion of process to FRP. A
process combines continuous and discrete aspects and generalizes
behaviors and events. In this talk, we develop a class of categorical
models for FRP with processes. We call these models abstract process
categories.

*Abstract:*
The presentation gives an overview of recent advances in multi-party
computation. We start with a brief overview of the SPDZ protocol,
which combines oblivious message authentication and somewhat
homomorphic encryption to generate a massive amount of Beaver triples.
In this presentation, we discuss whether the same throughput of Beaver
triples can be achieved using only an additively homomorphic
cryptosystem, and what the biggest performance bottlenecks are.
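
For readers unfamiliar with Beaver triples, the following Python sketch shows the standard way a precomputed triple (a, b, c) with c = ab lets two parties multiply additively shared values without revealing them; the modulus and two-party sharing are simplified for illustration.

```python
import random

P = 2_147_483_647  # toy prime modulus for additive secret sharing

def share(value):
    """Split a value into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (value - r) % P

def beaver_multiply(x_shares, y_shares, triple_shares):
    """Multiply secret-shared x and y using a triple (a, b, c), c = a*b.
    Only the differences d = x - a and e = y - b are ever opened."""
    (a1, a2), (b1, b2), (c1, c2) = triple_shares
    x1, x2 = x_shares
    y1, y2 = y_shares
    d = (x1 - a1 + x2 - a2) % P
    e = (y1 - b1 + y2 - b2) % P
    # x*y = c + d*b + e*a + d*e, computed share-wise (d*e added only once)
    z1 = (c1 + d * b1 + e * a1 + d * e) % P
    z2 = (c2 + d * b2 + e * a2) % P
    return (z1 + z2) % P

a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))
product = beaver_multiply(share(6), share(7), triple)
```

Since each multiplication consumes one triple, the throughput of triple generation directly bounds the throughput of secure multiplications, which is why the generation phase is the focus of the comparison.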

*Abstract:*
The increasing availability of event data recorded by contemporary
information systems makes process mining a valuable instrument to
improve and support business processes. The starting point for process
mining is an event log. Typically, three types of process mining can
be distinguished: (a) process discovery (learning a model from example
traces in an event log), (b) conformance checking (comparing the
observed behavior in the event log with the modeled behavior), and (c)
model enhancement (extending models based on additional information in
the event logs, e.g., to highlight bottlenecks). Existing process
mining techniques mainly use procedural process modeling languages for
describing the business processes under examination. However, these
languages are suitable only for stable environments where process
executions are highly predictable. In turbulent environments, where
process executions involve multiple alternatives, process models tend
to be complex and difficult to understand. In this talk, we introduce
a new family of process mining techniques based on declarative
languages. These techniques are well suited to analyzing less
structured business processes operating in environments where high
flexibility is required. They have been implemented in the process
mining tool ProM and range from process discovery, to model repair and
extension, to offline and online conformance checking.

*Abstract:*
The so-called NoSQL movement is driven largely by the need for
clustered deployment of applications in cloud computing environments.
Because the relational model does not lend itself easily to horizontal
scaling, a number of key-value based aggregate oriented data stores
have recently been developed to rise to the challenge.

Currently the space of NoSQL solutions is heterogeneous, with no standardization between different data stores. However, examining the capabilities of different NoSQL stores reveals a pattern common to all of them. This pattern, based on the "get", "put" and "delete" operations common to all data stores, allows for the introduction of a unifying layer of abstraction over different key-value based data stores. We propose and outline the details of a software framework implementing this abstraction. As an adapter to different data sources, this framework would simplify the work of application developers programming against different back-end interfaces.
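
A minimal sketch of such an abstraction layer might look as follows; the class and method names are invented, and a real framework would ship adapters wrapping actual store client APIs rather than the in-memory stand-in used here.

```python
class KeyValueStore:
    """Unifying interface over the get/put/delete operations shared by
    key-value stores; concrete adapters would wrap real client APIs."""
    def get(self, key):
        raise NotImplementedError
    def put(self, key, value):
        raise NotImplementedError
    def delete(self, key):
        raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """Dictionary-backed adapter, handy for tests and prototyping."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)

def cache_user(store, user_id, profile):
    # Application code programs against the abstraction only; swapping the
    # backend does not touch this function.
    store.put(f"user:{user_id}", profile)

store = InMemoryStore()
cache_user(store, 42, {"name": "Alice"})
```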

Slides of the talk: [pdf]

*Abstract:*
SecreC is a high-level imperative programming language for
implementing privacy-preserving applications in the Sharemind
framework. The third version of Sharemind supports multiple
privacy-preserving computation schemes and their concurrent use. The
new version of SecreC facilitates this functionality via protection
domain polymorphic functions. In this talk, we present and justify the
design of the new version of SecreC, and give a brief overview of the
formalizations that helped guide the design and implementation of the
language.

*Abstract:*
Additive combinatorics is hard to define, as Ben Green admits in his
book review of "Additive Combinatorics" (2006) by Terence Tao and Van
Vu. Here we shall be considering a simple problem concerning sum sets
of natural numbers, without the need to go into the quite advanced
machinery developed to tackle more complicated additive combinatorics
problems. In the paper I will be concentrating on, we came up with our
results by computer experimentation, which told us what could be
proved and what needed to be proved.

Motivated by a question of Sarközy, we investigate (in a 2009 paper by Benevides, Hulgan, Lemons, Palmer, myself and Wheeler) sufficient conditions for the existence of (infinite) sets of natural numbers $A$ and $B$ such that the number of solutions of the equation $a+b = n$ where $a \in A$ and $b \in B$ is monotone increasing in $n$. We also examine a generalized notion of Sidon sets. That is, we examine sets $A$, $B$ with the property that, for every $n \ge 0$, the equation above has at most one solution, i.e., all pairwise sums are distinct. We shall characterize such pairs of sets where, in addition to this Sidon property, $A+B$ (defined as $\{a+b \mid a\in A,\, b\in B\}$) is an interval of integers.
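
The kind of computer experimentation mentioned above can be reproduced in a few lines; the sketch below checks, on a finite truncation with sets chosen purely for illustration, the solution counts, the Sidon property and whether A + B forms an interval.

```python
from collections import Counter

# Illustrative choice: A = {0, 1}, B = the even numbers (truncated).
A = {0, 1}
B = {2 * k for k in range(50)}
N = 50  # inspect sums n = a + b up to this bound

# r[n] = number of solutions of a + b = n with a in A, b in B
r = Counter(a + b for a in A for b in B if a + b <= N)

sidon = all(count <= 1 for count in r.values())          # at most one solution
interval = sorted(r) == list(range(min(r), max(r) + 1))  # A + B is an interval
```

Here every n up to the bound has exactly one representation (even n as 0 + n, odd n as 1 + (n - 1)), so this pair exhibits both properties at once.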

In the open problems / work in progress section, the buzzword will be hypergraph, in addition to sum set and Sidon set. The part on hypergraphs consists of very recent and raw ideas.

Slides of the talk: [pdf]

*Abstract:*
In this survey talk we discuss the main milestones in the development
of modern coding theory. We start with the celebrated Shannon
Channel Coding Theorem, and the related motivation for the development
of good practical codes. We then present some important steps
in the development of coding theory, including Reed-Solomon codes,
their list-decoding and application to secret-sharing in cryptography,
turbo codes and low-density parity-check codes. Finally, we will talk
about new applications of coding theory in flash memories, network
communications, gossip algorithms, distributed storage and biology.

Slides of the talk: [pdf]

Demo code: [SelectionSort.ssc]

*Abstract:*
Spec# is a programming language that extends C# with non-null types,
code contracts and object invariants. Using program specifications,
the developer defines what correctness means for the program, and the
verification tool can check consistency between the specification and
the program. In the talk I will explain the use of contracts and show
examples. The language and the tool are developed at Microsoft
Research.
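
Spec# contracts are written directly in the C#-like source and checked statically by a verifier; purely as an illustration of the pre-/postcondition idea, here is a hypothetical run-time contract decorator in Python (this is not Spec# and is checked dynamically, not statically):

```python
import functools

def contract(requires=lambda *a: True, ensures=lambda result, *a: True):
    """Attach a precondition and a postcondition to a function."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert requires(*args), "precondition violated"
            result = f(*args)
            assert ensures(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(requires=lambda xs: len(xs) > 0,
          ensures=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)
```

A static verifier such as Spec#'s proves these conditions for all inputs at compile time, rather than checking them on each call as this sketch does.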

Slides of the talk: [pdf]

*Abstract:*
In a paper presented at SOS 2010, we developed a framework for
big-step semantics for interactive input-output in combination with
divergence, based on coinductive and mixed inductive-coinductive
notions of resumptions, evaluation and termination-sensitive weak
bisimilarity. In contrast to standard inductively defined big-step
semantics, this framework handles divergence properly; in particular,
runs that produce some observable effects and then diverge, are not
``lost''. Here we scale this approach to shared-variable concurrency
on a simple example language. We develop the metatheory of this
semantics in a constructive logic.
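
The point about divergent runs keeping their observable effects can be illustrated loosely in Python, modelling a run as a lazy stream of outputs (a rough stand-in for coinductive resumptions, not the talk's formal development):

```python
from itertools import islice

def diverging_run():
    """A run that produces two observable outputs and then diverges."""
    yield "hello"
    yield "world"
    while True:       # silent divergence: no further output, no termination
        pass

# Observing the run lazily recovers exactly the outputs produced before
# divergence; an inductive (finite) notion of evaluation would lose them.
observed = list(islice(diverging_run(), 2))
```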

Slides of the talk: [pdf]

*Abstract:*
We present a formalization of the Max-flow Min-cut theorem in HOL
Light. The Max-flow Min-cut theorem is a major result in graph theory
and an application of the strong duality theorem in linear
programming. We formalize a graph-theoretical version of the proof of
this theorem. The formalization is nontrivial not only because of the
complexity of the mathematical proof per se, but also because of the
lack of a basic library for graphs in the standard repositories of HOL
Light.
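
The theorem itself can be checked in executable miniature (this has nothing to do with the HOL Light formalization): compute a maximum flow by augmenting paths and compare its value with the capacity of the cut induced by residual reachability. The graph below is an invented toy instance.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp on a dict-of-dicts capacity graph."""
    nodes = set(capacity) | {v for vs in capacity.values() for v in vs}
    residual = {u: {} for u in nodes}
    for u, vs in capacity.items():
        for v, c in vs.items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)          # room for back-flow
    value = 0
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            # residually reachable nodes induce a cut; its capacity
            # equals the flow value, witnessing max-flow = min-cut
            reachable = set(parent)
            cut = sum(c for u in reachable
                      for v, c in capacity.get(u, {}).items()
                      if v not in reachable)
            return value, cut
        # augment along the found path by its bottleneck capacity
        path, v = [], sink
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        value += bottleneck

graph = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
flow_value, cut_value = max_flow(graph, "s", "t")
```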

*Abstract:*
Side-effecting constraint systems were originally introduced by Seidl,
Vene, and Müller-Olm for the analysis of multi-threaded code. In this
talk, I present our recent generalization of this idea to a unified
framework for interprocedural program analysis. Side-effecting
constraints allow us to tweak the amount of context-sensitivity and
combine context-sensitive analyses of local properties with
flow-insensitive analyses of global properties. Side-effecting
constraint systems thus form the ideal basis for building
general-purpose infrastructures for static analysis. They constitute
the core of the analyzer generator Goblint, which we used to
practically evaluate this approach on real-world examples.
(Joint work with Kalmer Apinis and Helmut Seidl; APLAS 2012.)

*Abstract:*
In this talk we introduce the secure subset covering problem, state it as a
binary integer linear programming task and design a genetic algorithm to solve
it. The algorithm has been implemented and benchmarked on the Sharemind
multi-party computation engine. We will discuss both the implementation
details and
benchmarking results.
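
As a plain (and entirely non-secure) sketch of the approach, the toy below encodes subset selection as a 0/1 vector, scores it with an ILP-style objective that penalizes uncovered elements, and evolves it with a tiny genetic algorithm; the instance and all parameters are invented, and the secure-computation aspect is not modelled at all.

```python
import random

random.seed(1)  # deterministic toy run

# Invented instance: cover the universe with as few subsets as possible.
universe = set(range(8))
subsets = [{0, 1, 2}, {2, 3, 4}, {4, 5}, {5, 6, 7}, {0, 3, 6}, {1, 7}]

def fitness(bits):
    """ILP-style objective: heavily penalize uncovered elements, then
    minimize the number of chosen subsets (lower is better)."""
    covered = set()
    for subset, chosen in zip(subsets, bits):
        if chosen:
            covered |= subset
    return 100 * len(universe - covered) + sum(bits)

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in subsets] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(subsets))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.2:           # occasional bit-flip mutation
                i = random.randrange(len(subsets))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Running the same search inside a secure computation engine changes the cost model drastically, which is what makes the benchmarking interesting.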

*Abstract:*
We introduce the concept of quantum alternation as a generalization of
quantum nondeterminism. We define the first quantum alternating Turing
machine (qATM) by augmenting an alternating Turing machine (ATM) with a
fixed-size quantum register. We focus on space-bounded computation,
and obtain the following surprising result: One-way qATMs with
constant-space (one-way alternating quantum finite automata (1AQFA))
are Turing-equivalent. Then, we introduce a strong version of the qATM:
a qATM that must halt in every computation path. We show that strong
qATMs (similar to private ATMs) can simulate deterministic space with
exponentially less space. This leads to shifting the deterministic
space hierarchy by exactly one level. We also focus on realtime
versions of one-way AQFAs (rtAQFA) and then obtain many interesting
results: (i) any language recognized by a rtAQFA is in quadratic
deterministic space, (ii) two-alternation is better than
one-alternation, (iii) two-alternation is sufficient to recognize an
NP-complete language, and so any language in NP can be recognized by a
poly-time log-space qATM with two alternations, and (iv)
three-alternation is sufficient to recognize a language which is
complete for the second level of the polynomial hierarchy, and so any
language in the second level of the polynomial hierarchy can be
recognized by a poly-time log-space qATM with three alternations.

Liina Kamm

Peeter Laud

Helger Lipmaa

Tarmo Uustalu

Varmo Vene

Last updated 6.02.2013