Dan Bogdanov: Framework for Fast Prototyping of Secure Computations

Slides of the talk. [pdf]

Abstract: Many organisations gather sensitive data from individuals. Due to legal restrictions on how the data can be processed, it is hard or even impossible to learn global properties or trends from the collected data. Moreover, the privacy of the participants is not guaranteed if administrative measures fail. We therefore apply secure multi-party computation methods to make data aggregation privacy-preserving and secure. We have designed and implemented a programming framework which allows us to quickly prototype and evaluate privacy-preserving aggregation and data mining algorithms.
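
The abstract does not describe the framework's internals, but the core idea behind privacy-preserving aggregation in secure multi-party computation can be sketched with additive secret sharing: each input is split into random shares that individually reveal nothing, yet the shares of all inputs can be combined to recover exactly the sum. The modulus and party count below are illustrative assumptions, not details from the talk.

```python
import secrets

MOD = 2 ** 32  # compute in the ring of 32-bit integers, a common choice in share-based SMC

def share(value, n_parties=3):
    """Split a value into n_parties additive shares that sum to it modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def aggregate(all_shares):
    """Each party sums the shares it holds locally; combining the local sums
    reveals only the total of all inputs, never an individual input."""
    n_parties = len(all_shares[0])
    local_sums = [sum(s[i] for s in all_shares) % MOD for i in range(n_parties)]
    return sum(local_sums) % MOD

inputs = [42, 17, 99]
shared = [share(x) for x in inputs]
assert aggregate(shared) == sum(inputs) % MOD
```

Any single party's shares are uniformly random, so no coalition smaller than all parties learns anything about an individual input; only the aggregate is reconstructed.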

Meelis Kull: Studying Alternative Splicing

Slides of the talk. [pdf]

Abstract: In prokaryotes (organisms whose cells have no nuclei), the numbers of proteins and genes are roughly equal. Alternative splicing is a gene regulation phenomenon that yields more than one protein per gene in eukaryotes (organisms with nuclei). This offers higher efficiency, because information can be stored much more economically. As alternative splicing also favours the emergence of new proteins, it is hypothesised to be one of the most powerful tools in the evolution of eukaryotic organisms.

We have studied tissue-specific alternative splicing using EST (Expressed Sequence Tag) data, aiming to discover proteins that are produced only (or mostly) in some specific tissue. I will describe the biological system of splicing in mathematical terms and present the tools we have developed to achieve our goals. (Joint work with Jaak Vilo.)

Reina Käärik: Generalised edit distance

Slides of the talk. [pdf]

Abstract: The simplest way to measure the similarity of two strings is the edit distance, also known as the Levenshtein distance. The conventional edit distance considers only a limited set of transformations, all with the same weight. In this work we propose an extended edit distance algorithm that works with a larger set of transformations, each of which can have its own weight. In addition, we extend the edit distance to work with multibyte encodings such as UTF-8. To achieve greater efficiency, our implementation of the extended edit distance algorithm uses tries. Finally, we consider a version of the algorithm that learns the weights of new transformations from the weights of existing ones.
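
The recurrence behind such a generalised distance can be illustrated with a small dynamic program. The abstract's implementation uses tries over the transformation set; the dictionary scan below is a deliberately naive per-cell illustration of the idea, and the example transformation and its weight are made up for demonstration.

```python
import math

def generalised_edit_distance(s, t, transforms, default=1.0):
    """Weighted edit distance: `transforms` maps (source, target) string pairs,
    possibly multi-character, to weights; plain single-character insertion,
    deletion and substitution fall back to the `default` weight."""
    m, n = len(s), len(t)
    d = [[math.inf] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if d[i][j] == math.inf:
                continue
            if i < m:  # delete s[i]
                d[i + 1][j] = min(d[i + 1][j], d[i][j] + default)
            if j < n:  # insert t[j]
                d[i][j + 1] = min(d[i][j + 1], d[i][j] + default)
            if i < m and j < n:  # match or substitute
                cost = 0.0 if s[i] == t[j] else default
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + cost)
            # user-supplied weighted transformations (a trie would avoid this scan)
            for (a, b), w in transforms.items():
                if s.startswith(a, i) and t.startswith(b, j):
                    d[i + len(a)][j + len(b)] = min(d[i + len(a)][j + len(b)], d[i][j] + w)
    return d[m][n]

# e.g. treating the substitution 'o' -> 'õ' as cheap (weight 0.2 instead of 1)
print(generalised_edit_distance("too", "tõo", {("o", "õ"): 0.2}))
```

Because Python strings are Unicode, the multibyte-encoding concern of the abstract disappears here; a C implementation over raw UTF-8 bytes would have to decode characters before applying the same recurrence.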

Peeter Laud: Dependency-graph-based protocol analysis II

Slides of the talk. [pdf]

Abstract: In a dependency graph, nodes correspond to operations, and edges connect producers of data to their consumers. A dependency graph thus makes explicit which data is used where; this information is not presented as clearly in other program representations, such as abstract syntax trees or control flow graphs. Certain forms of program dependency graphs can also be given a semantics, although this is in general trickier than for more orthodox representations.
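
For straight-line code, the producer-to-consumer edges described above can be computed with a single pass that remembers the last definition of each variable. This toy construction (statement format and example program are my own illustration, not the talk's representation) shows how def-use information becomes explicit:

```python
def dependency_graph(program):
    """Build def-use dependency edges for a straight-line program.
    Each statement is (target, op, operands); an edge (i, j) means
    statement j consumes a value produced by statement i."""
    last_def, edges = {}, []
    for j, (target, op, operands) in enumerate(program):
        for v in operands:
            if v in last_def:  # variables never defined are external inputs
                edges.append((last_def[v], j))
        last_def[target] = j
    return edges

prog = [("x", "const", []),
        ("y", "const", []),
        ("z", "add", ["x", "y"]),
        ("w", "mul", ["z", "x"])]
assert dependency_graph(prog) == [(0, 2), (1, 2), (2, 3), (0, 3)]
```

Branching and loops complicate the picture considerably (a use may have several reaching definitions), which is one reason giving dependency graphs a semantics is trickier than for syntax trees.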

Program (or protocol) dependency graphs make certain program transformations convenient, in particular those that correspond to definitions of cryptographic primitives. These transformations do not observably change the semantics of a protocol, but may change it syntactically in such a way that its security becomes more obvious. In our talk, we present a prototype protocol analyser, the transformations that we have implemented, and the insights gained.

Sven Laur: Two views on cryptographic reductions: Closing the gap between cryptography and program analysis

Slides of the talk. [pdf]

Abstract: There are two main tasks in cryptography. One can either try to discover or enhance cryptographic primitives, such as encryption, or try to combine existing primitives to meet some practical design criterion. Although the construction of complex protocols is clearly an engineering task, the security analysis of such protocols often resembles witchcraft; sometimes the results are clearly wrong or too vague. Hence, many famous cryptographers have tried to come up with rigorous methods. Nevertheless, we still do not have the kind of formal, machine-verifiable methodology that is common in other engineering disciplines.

Usually, security definitions are specified as a game between an adversary and a challenger, where the challenger conducts the game. The latter complicates security proofs, as one must give a reduction, i.e. construct a new adversary from the old one. However, there is an alternative: one can specify security definitions in an adversary-centric way [BR06]. Then the security proof contains no reductions; instead, one performs a set of clearly specified code transformations. As a result, security proofs for many protocols can be formally constructed and verified. Moreover, the analysis is precise and exact, as opposed to the polynomial security model.

[BR06] Bellare and Rogaway. Code-Based Game-Playing Proofs and the Security of Triple Encryption. Eurocrypt 2006.

Helger Lipmaa: Cryptographic techniques in privacy-preserving data-mining

Slides of the talk. [pdf]

Abstract: The primary task of data mining is to develop models of aggregated data, for example of the habits of Internet users, of loyal customers, etc. The main question of privacy-preserving data mining (PPDM) is whether we can develop accurate models without access to the precise information in individual data records. This question has proven difficult to solve. The techniques of PPDM can be divided into two large groups: the randomisation approach (related to the well-known statistical Randomized Response Technique) and the cryptographic approach.
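
The Randomized Response Technique mentioned above can be sketched in a few lines: each respondent answers truthfully only with some probability p, which protects the individual answer, yet the true population proportion can still be recovered by inverting the randomisation. The parameter p = 0.75 and the sample sizes are illustrative choices, not values from the tutorial.

```python
import random

def randomised_response(truth, p=0.75, rng=random):
    """Answer truthfully with probability p; otherwise give the opposite answer."""
    return truth if rng.random() < p else not truth

def estimate_proportion(answers, p=0.75):
    """The observed 'yes' rate is p*pi + (1-p)*(1-pi) for true rate pi,
    so pi is recovered by inverting this linear relation."""
    observed = sum(answers) / len(answers)
    return (observed - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_answers = [rng.random() < 0.3 for _ in range(100000)]
reported = [randomised_response(a, 0.75, rng) for a in true_answers]
print(estimate_proportion(reported))  # close to the true rate 0.3
```

No individual's reported answer proves anything about their true answer, yet the aggregate estimate is unbiased; the price is extra variance, which shrinks as the sample grows.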

In this tutorial, we give an overview of the cryptographic approach, which has many benefits but also several drawbacks. A major sociological problem with the cryptographic approach is that the cryptographic and data mining communities are currently quite separate. On the one hand, without special cryptographic training one tends to propose protocols for PPDM that are less secure, or at least less efficient, than up-to-date cryptographic protocols. On the other hand, without special training in data mining, one may end up solving the wrong problem. In particular, the security definitions for different PPDM tasks are application-dependent and have their roots in data mining.

A major technical problem is that data mining tasks routinely deal with huge amounts of data. Mining this data, even without any privacy concerns, is non-trivial. Cryptography, even on much smaller amounts of data, is also non-trivial. Mixing the two usually results in PPDM solutions that are very complicated, and --- most importantly --- unlikely to be practical. Given this, it is important, in a collaboration between the two communities, to come up with security definitions for PPDM that potentially have efficient cryptographic implementations.

Härmel Nestra: Algorithms for multi-dimensional search

Slides of the talk. [pdf]

Abstract: Binary search in a one-dimensional sorted array is well known. In this seminar, we try to design efficient algorithms for searching in multi-dimensional sorted arrays.
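
A standard baseline for the two-dimensional case (not necessarily the algorithm developed in the seminar) is saddleback search: in a matrix whose rows and columns are both sorted, start at the top-right corner and discard one row or one column per comparison, giving O(m + n) time instead of scanning all m*n entries.

```python
def saddleback_search(matrix, target):
    """Search a matrix whose rows and columns are sorted in ascending order.
    Returns the (row, col) of one occurrence of target, or None."""
    if not matrix or not matrix[0]:
        return None
    i, j = 0, len(matrix[0]) - 1          # top-right corner
    while i < len(matrix) and j >= 0:
        if matrix[i][j] == target:
            return (i, j)
        if matrix[i][j] > target:
            j -= 1    # everything below in this column is even larger
        else:
            i += 1    # everything to the left in this row is even smaller
    return None

grid = [[1, 4, 7],
        [2, 5, 8],
        [3, 6, 9]]
assert saddleback_search(grid, 6) == (2, 1)
assert saddleback_search(grid, 10) is None
```

The interesting question the abstract raises is whether and how such bounds can be improved or generalised to higher dimensions, where the information ordering is much weaker than in the one-dimensional case.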

Ulrich Norbisrath: Configuring eHome systems

Slides of the talk. [pdf]

Abstract: New developments and the decreasing cost of electronic appliances enable the realisation of pervasive systems in our daily environment. In this work, I focus on eHome systems. The cost of individually developing and adapting the software that makes up these systems is one of the major obstacles to their large-scale adoption. In this talk, I introduce an approach built on functionality composition for automatic service configuration in different environments. The repetitive development process is transformed into a single development process followed by a repetitive configuration process. This configuration process is supported by our tool suite, the eHomeConfigurator. The result is a deployable configuration graph capable of describing the dependencies and contexts of components in the eHome field. The tool suite has been used to configure and deploy various services in different home environments. Compared to the classical development process, the effort for setting up eHome systems is reduced significantly. For demonstration purposes, I will also bring a small eHome model made from Lego to show the configuration process and various eHome services. The demonstration will provide a platform for discussing eHome systems and future extensions (e.g. voice control) in general.

Rein Prank: Interactive problem solving environment T-algebra

Slides of the talk. [pdf]

Abstract: T-algebra is a project for creating an interactive problem-solving environment for basic school expression manipulation exercises: calculating the values of numerical expressions; operations with fractions; solving linear equations, inequalities and systems of linear equations; and operations with monomials and polynomials. The presentation demonstrates and motivates the solution step interface and error diagnostics developed in T-algebra.

Jaanus Pöial: Type checking tools for untyped stack-based languages

Slides of the talk. [pdf]

Abstract: The talk describes a static type checking framework for stack-based low-level programming languages, e.g. Forth, PostScript, and others. Stack-based architectures are found in several popular virtual machines (CLR, JVM) as well as in hardware. The code of a stack-based virtual machine is itself an example of a stack language that needs to be analysed. In most cases the stack language itself is untyped: type-related constraints are part of the coding standard and programming culture, not of the language. We introduce a so-called "external" analysis tool which, under certain conditions, can find places in the code where the use of the stack looks suspicious (but need not necessarily be wrong). The core of the prototype analyser is written in Java.
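
The flavour of such an external analysis can be shown with a toy stack-effect checker: each word is given a declared stack effect (items consumed, items produced), and the analyser simulates stack depth to flag suspicious spots such as possible underflows. The word set and effects below are invented for illustration; the actual prototype is written in Java and handles far richer type constraints.

```python
# Toy stack-effect table: word -> (items consumed, items produced).
EFFECTS = {
    "push": (0, 1),   # push a literal
    "dup":  (1, 2),
    "drop": (1, 0),
    "swap": (2, 2),
    "add":  (2, 1),
}

def check(program):
    """Simulate stack depth word by word; collect warnings for possible
    underflows and return the net depth left on the stack at the end."""
    depth, warnings = 0, []
    for pc, word in enumerate(program):
        consumed, produced = EFFECTS[word]
        if depth < consumed:
            warnings.append(f"possible stack underflow at {pc}: '{word}'")
            depth = 0          # recover and keep scanning for further issues
        else:
            depth -= consumed
        depth += produced
    return depth, warnings

depth, warnings = check(["push", "push", "add", "drop", "add"])
# the final 'add' needs two operands but none remain, so it is flagged
assert warnings
```

As the abstract notes, a flagged spot is only suspicious, not necessarily wrong: in real code the stack may have been prepared by a caller the analyser cannot see, which is why the tool reports warnings rather than errors.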

Ando Saabas: Type systems for optimizing stack-based code

Slides of the talk. [ppt]

Abstract: We give a uniform type-systematic account of a number of optimizations, and the underlying analyses, for a bytecode-like stack-based low-level language, including analysis algorithms and soundness proofs. Specifically, we treat dead store instructions, load-pop pairs, duplicating load instructions, and store-load pairs. The load-pop pair and store-load pair elimination optimizations are built on top of bidirectional analyses, facilitating correct elimination of instruction pairs spanning basic block boundaries. As a result, no assumptions are needed about the input code: it need not be the compiled form of a high-level source program, the stack need not be empty at basic block boundaries, and the code need not even be checked for safety before the analysis. The analysis algorithms and soundness proofs are simple and uniform. (Joint work with Tarmo Uustalu.)
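
To illustrate one of the optimizations by name, here is a purely local load-pop peephole pass on a toy instruction list: a load immediately followed by a pop has no observable effect, so the pair can be removed. Note this sketch works only within a straight-line sequence; the point of the talk's bidirectional, type-based analyses is precisely to eliminate such pairs even when they span basic block boundaries, which this naive version cannot do.

```python
def eliminate_load_pop(code):
    """Peephole pass over a toy stack language: remove every
    'load x' instruction that is immediately cancelled by a 'pop'."""
    out = []
    for instr in code:
        if instr == "pop" and out and out[-1].startswith("load "):
            out.pop()          # cancel the load instead of emitting the pop
        else:
            out.append(instr)  # 'pop' after anything else must stay: it has an effect
    return out

code = ["load x", "load y", "pop", "store z"]
assert eliminate_load_pop(code) == ["load x", "store z"]
```

A type-systematic formulation replaces this ad hoc pattern match with a derivation: the analysis assigns stack types to program points, and the soundness proof shows the transformed code has the same observable semantics.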

Kristo Tammeoja: Approximate Search of Regular Expressions Using Bit-Parallel Algorithms

Slides of the talk. [pdf]

Abstract: Approximate matching is an important problem in information retrieval and bioinformatics applications. We have implemented an approximate search algorithm for matching a subset of regular expressions. The basic principles of the algorithm have previously been described in Navarro's article "NR-grep: a Fast and Flexible Pattern Matching Tool". The algorithm uses the Glushkov method to generate a non-deterministic automaton, and a bit-parallel method to match this automaton with a small number of allowed mismatches, insertions, and deletions. We have also introduced a method for including error-free regions in the expressions. We compared our C++ implementation to other libraries for exact regular expression matching, where our method is roughly comparable with existing programs. For approximate matching we found only one comparable tool; our implementation outperforms it by a factor of about 100. The method is included in our text algorithm library Sally, which is currently under development. (Joint work with Jaak Vilo.)
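
The bit-parallel core on which such algorithms build can be shown in its simplest form, Shift-And exact matching of a plain string pattern: the automaton's active states are packed into the bits of one machine word, so each text character is processed with a couple of word operations. The NR-grep approach generalises this to Glushkov NFAs of regular expressions and adds error levels; this sketch covers only the exact, plain-string base case.

```python
def shift_and(text, pattern):
    """Bit-parallel Shift-And matching: bit i of `state` is set iff
    pattern[:i+1] matches a suffix of the text read so far.
    Returns the end positions of all occurrences of pattern in text."""
    m = len(pattern)
    mask = {}                              # per-character bit masks
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    state, hits = 0, []
    for pos, c in enumerate(text):
        # extend every active prefix by c, and try to start a new match (the |1)
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & (1 << (m - 1)):         # full pattern matched
            hits.append(pos)
    return hits

assert shift_and("abracadabra", "abra") == [3, 10]
```

Allowing k mismatches, insertions and deletions amounts to keeping k+1 such state words, one per error level, with extra word operations linking neighbouring levels; Python's unbounded integers hide the word-size limit that a C++ implementation must handle explicitly.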

Tarmo Uustalu: Categorical views on bottom-up tree transducers

Slides of the talk. [pdf]

Abstract: We consider three types of bottom-up tree transducers at increasing levels of generality: relabelling (branching-preserving), rebranching (layering-preserving), and relayering (general) tree transducers. For each type, we show the equivalence of three categories, in fact, arrows: the transducers of this type (modulo bisimilarity), the (co/bi-)Kleisli category of a comonad/distributive law, and a subclass of tree functions. (Joint work with Ichiro Hasuo and Bart Jacobs.)
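
To make the first, simplest type concrete: a relabelling transducer preserves the branching structure of the tree and only rewrites labels, bottom-up, so each new label may depend on the already-rewritten children. The state-free sketch below is my own simplification (real transducers carry states, and the talk's categorical treatment is not captured here); the rewriting rule is a made-up example.

```python
# A tree is (label, [children]).
def relabel(tree, rule):
    """Bottom-up relabelling: branching is preserved; each node's new label
    is computed by `rule` from its old label and its children's new labels."""
    label, children = tree
    new_children = [relabel(c, rule) for c in children]
    return (rule(label, [c[0] for c in new_children]), new_children)

# hypothetical rule: mark a node '+' if its label is 'x' or any child is marked
rule = lambda label, child_labels: "+" if label == "x" or "+" in child_labels else "-"

t = ("a", [("x", []), ("b", [])])
assert relabel(t, rule) == ("+", [("+", []), ("-", [])])
```

Rebranching transducers may additionally reshape each layer of children, and relayering ones may move material between layers; the equivalences of the talk characterise exactly which tree functions each class can realise.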

Eero Vainikko: Parallel Solution of PageRank Problem

Slides of the talk. [pdf]

Abstract: In this talk we deal with PageRank, the well-known Web page ranking algorithm introduced by Google. We start with a short survey of different interpretations of the PageRank problem and of ways to solve it efficiently in parallel. We then investigate more closely the representation of the problem as a system of sparse linear equations, possible methods for solving the system efficiently, and a prospective implementation in our own parallel black-box linear system solver DOUG (Domain Decomposition on Unstructured Grids).
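
The iterative interpretation the linear-system view competes with is ordinary power iteration, which is easy to sketch. The graph, damping factor d = 0.85 and tolerance below are illustrative defaults; the talk's point is that the same fixed point also solves the sparse linear system (I - dM)x = ((1-d)/n)e, which opens the door to general parallel linear solvers such as DOUG.

```python
def pagerank(links, d=0.85, tol=1e-12):
    """Power iteration for PageRank on a sparse link structure.
    `links` maps each page to the list of pages it links to."""
    pages = sorted(set(links) | {q for outs in links.values() for q in outs})
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1 - d) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: distribute its rank uniformly over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new

ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

Power iteration converges geometrically with ratio d, which is acceptable for d = 0.85 but motivates the linear-system reformulation: Krylov and domain-decomposition methods can converge faster and parallelise along the partitioning of the sparse matrix.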

Vesal Vojdani: Completely generic as-path-sensitive-as-necessary multithreaded API analysis

Slides of the talk. [pdf]

Abstract: We have been working on a static analyzer for many years, and while it is almost ready for a decent tool demo, we are also thinking about presenting some of the underlying ideas. In this talk, I will explore ways to generalize the methods used in our analyzer, and perhaps solicit some feedback on how to make these ideas sufficiently interesting to be written down more formally. One idea is to exploit the detailed information that the analyzer already computes for data-race analysis in order to check other, similar properties. Ideally, we would like to develop a specification framework in which such properties can be expressed by the user. The key mechanism for achieving these goals is a generic method of transforming a simple analysis specification into a path-sensitive analysis. This will be the main focus of the talk.

Jan Willemson: Algorithmic Generation of Path Fragment Covers for Mobile Robot Path Planning

Slides of the talk. [pdf]

Abstract: This paper describes a framework for mobile robot path planning in dynamic environments, where the environments are represented by grid maps. The planning heuristic considered in the paper is to generate a minimal set of paths such that all possible two-edge path fragments are covered. We show that the number of required paths is linear in the dimensions of the grid, making the approach realistic and scalable. The main contribution of the paper is a description of all the minimal covers by means of an efficient algorithm. We prove its correctness and conclude that there are $2^{(m-1)(n-2)+(m-2)(n-1)}$ minimal path fragment covers in an $m\times n$ grid.
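
The closed-form count at the end of the abstract is easy to evaluate, which gives a feel for how quickly the space of minimal covers grows even though each individual cover contains only linearly many paths. The function below simply evaluates the abstract's formula; it does not enumerate covers.

```python
def minimal_cover_count(m, n):
    """Number of minimal two-edge path fragment covers of an m x n grid,
    per the formula in the abstract: 2^((m-1)(n-2) + (m-2)(n-1))."""
    return 2 ** ((m - 1) * (n - 2) + (m - 2) * (n - 1))

# a 2x2 grid admits exactly one minimal cover; a 3x3 grid already admits 16
assert minimal_cover_count(2, 2) == 1
assert minimal_cover_count(3, 3) == 16
```

The exponent grows roughly like 2mn, so the number of minimal covers is doubly exponential in the grid side length, which is why an implicit description of all covers via an algorithm is preferable to listing them.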

Peeter Laud
Helger Lipmaa
Tarmo Uustalu
Varmo Vene
Last updated 22.01.2007