Talk slides. [pdf]

*Abstract:*
The status of infinity has been a philosophical battleground since
Antiquity. Greek philosophers already debated whether it can be
conceived as existing in actuality or just as potential unboundedness.
The treatment of actual infinite objects has been seen across history
as a province of metaphysics or as a technical trick to solve some
mathematical problems. Starting with Cantor, infinite sets of
different sizes have been given a precise mathematical status, leading
eventually to modern set theory. However, fierce criticism was fired
at this conception by constructivist and predicativist thinkers.

In recent decades, significant advances have been made in the constructive understanding of infinity. At first it may seem that computer science should be impervious to any notion of infinite object: computers are discrete machines with a finite memory. Surprisingly, mathematical methods to represent infinite objects and compute with them have been devised. It even turns out that some computational problems are best solved by using infinite data structures.
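The computational representation of infinite objects mentioned above typically rests on lazy evaluation: an infinite structure is represented by a finite program that produces any demanded finite part of it on request. A minimal illustrative sketch in Python (generators are one such mechanism; the example is mine, not from the talk):

```python
from itertools import islice

def fibs():
    """An infinite stream of Fibonacci numbers, produced lazily."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Only the demanded prefix is ever computed; the stream itself is never
# materialized in full.
first_ten = list(islice(fibs(), 10))
# first_ten == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The stream is a perfectly ordinary finite value (a generator object); infinity lives only in the demand-driven protocol by which its elements are extracted.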

I will give a brief historical review of the computational treatment of infinity and explain the basic mathematical notions that are being developed in this field.

Talk slides. [ppt]

*Abstract:*
A business process model is a representation of the way an
organization operates to achieve a goal, such as delivering a product
or a service. For example, a business process model may describe the
activities (both manual and automated) performed within an insurance
company to handle an insurance claim lodged by a customer. Business
process models serve multiple purposes. On the one hand, business and
system analysts use process models to identify and evaluate business
improvement options or to define system requirements. On the other
hand, software developers are concerned with the automated execution
of business processes based on detailed models. Depending on the
purpose, a business process may be modeled at different abstraction
levels and using different languages. This talk will review various
challenges raised by the definition of model transformations aimed at
bridging between different business process modeling languages. The
talk will discuss transformations from popular business process
modeling notations to variants of Petri nets and state machines to
support automated analysis, as well as two-way transformations from
high-level to executable process modeling languages.

Talk slides. [ppt]

*Abstract:*
With the emergence of drug resistant strains, Staphylococcus aureus
represents a significant human health threat. Identifying the family
types efficiently and quickly is crucial in community settings. Here,
we develop a novel hybrid sequence algorithm approach to type this
bacterium using only its spa gene, a method that is cheaper than
multilocus sequence typing (MLST), which uses seven housekeeping
genes. Our resulting spa groups correlate strongly with those defined
by MLST. Furthermore, our methods can be fine-tuned to be more
discriminative so that new strains can be identified. Finally, we
performed data projection to visualize the data and the relationships
between isolates. The proposed methodology provides a promising new
approach to molecular epidemiology.

Talk slides. [pdf]

*Abstract:*
We are developing a method for privacy-preserving data processing. Our
solution comes in the form of a distributed virtual processor that can
execute privacy-preserving operations. The data is stored in a
secret-shared form and we have designed a suite of multi-party
computation protocols for performing computations on such data. We
have proven that these protocols preserve privacy also when composed
with each other, so we can write programs for the processor and expect
them not to leak data.
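As an illustration of what "stored in a secret-shared form" means, here is a minimal additive secret sharing sketch in Python (the 32-bit modulus and three parties are assumptions chosen for illustration; the actual multi-party computation protocols are considerably more involved):

```python
import secrets

MOD = 2**32  # illustrative 32-bit ring; all arithmetic is mod 2**32

def share(x, n=3):
    """Split x into n additive shares whose sum mod 2**32 equals x.
    The first n-1 shares are uniformly random; the last one corrects
    the sum, so any n-1 shares together are still uniformly random."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares."""
    return sum(shares) % MOD

s = share(1234567)
assert reconstruct(s) == 1234567
```

No single party (indeed, no coalition short of all of them) learns anything about the secret from its own shares, which is what makes computing directly on shares attractive.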

My talk will be about Sharemind - our software implementation of this processor. We are constantly measuring the performance of operations on Sharemind and exploring methods for making the system faster. I will give an overview of the benchmarks we've done and discuss the conclusions we've made.

You can find out more about Sharemind at http://sharemind.cs.ut.ee/.

Talk slides. [pdf]

*Abstract:*
In model checking and model-based testing it is often necessary to
perform the equivalent of a traversal of the whole state space of a
model. Two general reduction techniques, partial order reduction and
symmetry reduction, make it possible to shrink the space that needs to
be traversed in order to decide whether a certain property holds.
We will give an overview of the state of the art in symmetry
reduction. We will look at how scalar sets work and how they are
related to the orbit problem in group theory. Then we will proceed to
how static topologies can help reduce the state space. Finally, we
will look at state-isomorphism-based symmetry reduction and how such
symmetry reduction can be performed in several stages to reduce the
overall computational complexity. Examples are provided using tools
such as Uppaal, NModel and Groove.
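A toy illustration of the savings symmetry reduction can give (my example, not from the talk): when processes indexed by a scalar set are fully interchangeable, sorting their local states picks exactly one representative per orbit of the permutation group, so only orbit representatives need to be explored.

```python
from itertools import product

def canonical(state):
    """Canonical representative of a state under full process symmetry:
    processes indexed by a scalar set may be permuted freely, so sorting
    the tuple of per-process local states selects one member of each
    orbit."""
    return tuple(sorted(state))

# 3 identical processes, each in one of 4 local states: 4**3 = 64 raw
# states collapse to the number of size-3 multisets over 4 elements,
# i.e. C(6, 3) = 20 orbit representatives.
orbits = {canonical(s) for s in product(range(4), repeat=3)}
assert len(orbits) == 20
```

Real tools must of course also map transitions between canonical forms, and computing canonical forms for richer symmetries runs into the orbit problem mentioned above; sorting suffices only for full symmetry.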

Talk slides. [pdf]

*Abstract:*
In this talk we introduce a new combinator, inspired by comonadic
recursion schemes developed by Uustalu, Vene and Pardo. This
combinator not only generalizes the existing recursion schemes, but
also provides some interesting insights in the nature of optimal
memoizing computation.
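To hint at the connection between recursion schemes and memoized computation (a simplified sketch of my own, not the combinator of the talk): a fixed-point combinator that caches every subresult turns course-of-values recursion, which naively takes exponential time, into a linear-time computation.

```python
from functools import lru_cache

def memo_fix(step):
    """A memoizing fixed-point combinator: `step` receives the cached
    recursive function as its first argument. Every subproblem is
    computed at most once."""
    @lru_cache(maxsize=None)
    def rec(n):
        return step(rec, n)
    return rec

# Course-of-values recursion: fib(n) consults the two previous values.
fib = memo_fix(lambda self, n: n if n < 2 else self(n - 1) + self(n - 2))
assert fib(100) == 354224848179261915075
```

Comonadic recursion schemes such as histomorphisms make exactly this access to previously computed values explicit and structured, rather than hiding it in a cache.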

Talk slides. [pdf]

*Abstract:*
Traditionally, the core of a cryptographic reduction (proof that a
primitive A is secure on the premise that some primitive B is secure)
has been the construction of a program of the adversary successfully
attacking A, assuming access to a routine that successfully attacks B.
The proof of correctness of this program (that it indeed breaks A; and
presents the correct view to the subroutine breaking B) is an integral
part of the reduction. The size of the adversary is often substantial
and its structure is complex. Hence it may be nigh impossible to
verify its correctness by hand. The tool support for such a
verification is also lacking: it is hard to formalize arguments
about (conditional) probabilities and events that happen negligibly
often.

For quite some time, the security definitions of cryptographic primitives have been formalized by stating that two experiments cannot be distinguished by any adversary. These experiments are specified as (small) programs; the definition thus states that two programs are indistinguishable in certain contexts. If we also write down the construction of the primitive A (from the primitive B) as a program, then we can search its code for a subprogram that appears in the security definition of B, and replace it with the other program from B's security definition. The transformed primitive can then be transformed again, until we reach a program that is obviously secure. The proof is thus decomposed into several steps that can all be verified independently. The steps can be made small enough that their verification does not pose many problems. In this way we can increase our confidence in the correctness of the proof. Tool support for automatic verification or generation would be feasible, too.

In the tutorial I will demonstrate this technique in the security proofs of cryptographic primitives, as well as in the static analysis of programs and protocols for certain security properties.

Talk slides. [pdf]

*Abstract:*
We propose a service composition method based on higher-order
workflows. The method allows automatic generation of higher-order
workflows with precise semantics for automatic web service
composition. An implementation of the proposed composition method has
been developed in the visual language development framework CoCoViLa.
The implemented tool allows describing service composition workflows
and service models in a simple visual language. It allows us to reason
about the reachability of goals in the described workflow or to
generate workflows from service models. In addition, BPEL or OWL-S
composite service descriptions can be generated.

Talk slides. [pdf]

*Abstract:*
In this talk I will present a Dynamic Logic for deductive verification
of type-safe C programs (CDL). It allows verification of C programs
w.r.t. operation contracts and invariants. Our approach is based on
the KeY Dynamic Logic (DL) and tool, previously targeted at Java. We
have extended the KeY architecture to support language variability:
DLs for new programming languages can be implemented as plugins to
the KeY tool. On this basis we have created KeY-C, a tool for
deductive verification of C programs.

You can find out more about the KeY project at http://www.key-project.org/.

Talk slides. [pdf]

*Abstract:*
Nowadays, grids are heavyweight and complex; in particular, the effort
of submitting small jobs is out of all proportion to the
administrative support needed to run the grid infrastructure. The
initial metaphor of the computational grid being as easy to use as the
power grid remains unfulfilled. The resources in grids are centrally
managed by IT professionals and hence require a considerable amount of
work to administer and maintain.

To facilitate public usage of grids, we extend grid computing with techniques from peer-to-peer computing combined with instant messaging. Instant messaging is used by the vast majority of computer users. We assume that using instant messaging techniques for the administrative part of setting up the grid will open grid capabilities to the public and thereby enable a personal desktop grid. It also provides a new approach to authentication and authorization.

By analogy with peer-to-peer computing, the paradigm applied here is called friend-to-friend computing (F2F computing), combining grid computing, peer-to-peer techniques, and friends, the latter referring to spontaneously formed social networks. Just as Skype made voice-over-IP usable by everyone, F2F computing could make distributed computational power usable by everyone.

Talk slides. [pdf]

*Abstract:*
This talk gives an overview of simulation in cyber security: what
kinds of problems exist and what can be done using computer
simulations to improve the situation. The talk introduces a few
software packages for modeling and simulation of computer networks.
Finally, some new and planned features of CoCoViLa to support
simulation in cyber security are described.

Talk slides. [pdf]

*Abstract:*
Bideterministic automata are deterministic finite automata whose
reversal automata are also deterministic. It can
be easily seen that any bideterministic automaton is a minimal DFA.
Bideterministic automata have previously been shown to be unique (up
to an isomorphism) minimal NFAs with respect to the number of states.
In this talk, we discuss transition minimality issues of these
automata. Bideterministic automata are transition-minimal NFAs.
However, as this transition minimality is not necessarily unique, we
also present the necessary and sufficient conditions for a
bideterministic automaton to be uniquely transition-minimal among
NFAs. Furthermore, bideterministic automata are transition-minimal
$\epsilon$-NFAs.
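The defining property can be checked directly: an automaton given as a DFA is bideterministic iff it has a single initial and a single final state and its reversal is again deterministic. A small sketch in Python (the encoding of automata as transition dictionaries is my own):

```python
def is_bideterministic(delta, inits, finals):
    """delta: dict mapping (state, symbol) -> state, i.e. a DFA's
    transition function. The reversal is deterministic iff no two
    transitions carrying the same symbol enter the same state, and the
    reversed automaton has a single initial state iff the original has
    a single final state."""
    if len(inits) != 1 or len(finals) != 1:
        return False
    entering = set()                     # (target, symbol) pairs seen
    for (src, sym), tgt in delta.items():
        if (tgt, sym) in entering:
            return False                 # reversal would be nondeterministic
        entering.add((tgt, sym))
    return True

# (ab)*: a two-state cycle, bideterministic
assert is_bideterministic({('q0', 'a'): 'q1', ('q1', 'b'): 'q0'},
                          {'q0'}, {'q0'})
# two 'a'-edges into the same state break reversal determinism
assert not is_bideterministic({('p', 'a'): 'r', ('q', 'a'): 'r'},
                              {'p'}, {'r'})
```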

Talk slides. [pdf]

*Abstract:*
This talk surveys events, people and trends in computing in the three
Baltic states, Latvia, Lithuania and Estonia, during their Soviet
period. Soviet computing science and practice could be divided into
league A and league B: the first served the defense industry and power
structures, the second the wider public. Although Baltic computing
belonged in essence to league B, its scientific results were on a
level with league A. The historical perspective taken also
demonstrates the changes in the content of computer science in
general.

Talk slides. [pdf]

*Abstract:*
I'll discuss an easily formulated problem: find a proof of a given
formula in intuitionistic propositional logic or an inhabitant of a
given type in simply-typed lambda calculus. I'll touch upon the issues
in backward search for sequent-style proofs (solved either by
loop detection or by a switch to a contraction-free sequent calculus)
and then go on to the cool little-known idea that I explain as forward
search for natural deduction proofs (but that also goes under the
names of the inverse method, Mints-style resolution and Stålmarck's
method). Time permitting, I'll show Augustsson's Djinn program.
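As a taste of backward sequent search with loop detection (a minimal sketch of my own, restricted to the implicational fragment; a formula is an atom, written as a string, or a pair `(p, q)` standing for p → q):

```python
def provable(goal, ctx=frozenset(), seen=frozenset()):
    """Backward proof search in the implicational fragment of
    intuitionistic propositional logic. A sequent that repeats on the
    current branch is cut off: any proof could avoid the repetition,
    so pruning it preserves completeness while forcing termination."""
    sequent = (ctx, goal)
    if sequent in seen:
        return False                      # loop detected on this branch
    seen = seen | {sequent}
    if isinstance(goal, tuple):           # right rule for ->
        p, q = goal
        return provable(q, ctx | {p}, seen)
    if goal in ctx:                       # axiom: atomic goal in context
        return True
    for h in ctx:                         # left rule for ->
        if isinstance(h, tuple):
            p, q = h
            if provable(p, ctx, seen) and provable(goal, ctx | {q}, seen):
                return True
    return False

combinator_k = ('a', ('b', 'a'))                  # a -> b -> a
peirce = ((('a', 'b'), 'a'), 'a')                 # ((a -> b) -> a) -> a
assert provable(combinator_k)
assert not provable(peirce)    # valid classically, not intuitionistically
```

Note how the context is kept when proving the premise of an implication in it (contraction); without the `seen` check this is exactly what makes naive backward search loop.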

*Abstract:*
In 2007, Dan Bogdanov and Sven Laur proposed a framework for secure
computations that achieves information-theoretic security against one
semi-honest adversary. However, their protocols are heavily optimized
for just three parties. In this talk we show how to generalize one of
the protocols (secure multiplication) to $2n+1$ parties, achieving
security against $n$ semi-honest adversaries.
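One classical route to such a generalization (shown here as an illustrative sketch, not necessarily the protocol of the talk) is Shamir sharing over a prime field: degree-$n$ polynomials tolerate $n$ colluders, and the pointwise product of two sharings is a degree-$2n$ sharing, which $2n+1$ parties can still reconstruct.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def share(secret, deg, parties):
    """Shamir sharing: a random degree-`deg` polynomial f with
    f(0) = secret; party i receives the point f(i)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, parties + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from the shares of parties 1..m."""
    m, total = len(shares), 0
    for i in range(1, m + 1):
        num, den = 1, 1
        for j in range(1, m + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total += shares[i - 1] * num * pow(den, P - 2, P)  # den^-1 mod P
    return total % P

n = 2                       # number of tolerated semi-honest parties
parties = 2 * n + 1         # 5 parties
a, b = 1234, 5678
sa, sb = share(a, n, parties), share(b, n, parties)
# Local multiplication of shares doubles the polynomial degree to 2n,
# which 2n+1 interpolation points still determine uniquely.
prod_shares = [(x * y) % P for x, y in zip(sa, sb)]
assert reconstruct(prod_shares) == (a * b) % P
# (A full protocol must also reshare to bring the degree back down to n
# before the next multiplication; that step is omitted here.)
```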

Seminar slides. [pdf]

Liina Kamm

Peeter Laud

Helger Lipmaa

Tarmo Uustalu

Varmo Vene

Last updated 29.01.2008