Important Dates

Full papers: April 15, 2011
Notification: May 30, 2011
Final version: June 13, 2011
Conference: August 22-26, 2011

Contact

MFCS 2011 PC chairs:
Filip Murlak & Piotr Sankowski


Invited Speakers

Alexandr Andoni, Microsoft Research Mountain View

Jörg Flum, Albert-Ludwigs-Universität Freiburg

Mai Gehrke, Radboud Universiteit Nijmegen

Daniel Kirsten, Humboldt-Universität zu Berlin

Prasad Raghavendra, Georgia Tech

Paul Wollan, Sapienza University of Rome


Abstracts

Nearest Neighbor Search in High-Dimensional Spaces by Alexandr Andoni

Nearest neighbor search in high-dimensional spaces is a ubiquitous problem in searching and analyzing massive data sets. In this problem, the goal is to preprocess a set of objects (such as images), so that later, given a new query object, one can efficiently return the object most similar to the query. This problem is of key importance in several areas, including machine learning, information retrieval, image/video/music clustering, and others. For instance, it forms the basis of a widely used classification method in machine learning: to label a new object, just find a similar but already-labeled object. Nearest neighbor search also serves as a primitive for other computational problems such as closest pair, minimum spanning tree, or variants of clustering.
In this talk, I will survey the state of the art in nearest neighbor search. I will give a flavor of the main algorithms and techniques involved, both classical and more recent ones. Along the way, I will highlight the current challenges in the area.
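As a concrete illustration of the problem the talk surveys, here is a minimal sketch (our own, not from the talk): an exact nearest neighbor search by linear scan, plus a random-hyperplane locality-sensitive hash that restricts the scan to one hash bucket. All names and parameters here are illustrative assumptions.

```python
import math
import random

def euclidean(p, q):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_neighbor(points, query):
    """Exact nearest neighbor by linear scan: O(n * d) per query."""
    return min(points, key=lambda p: euclidean(p, query))

def make_hash(dim, num_bits, rng):
    """Random-hyperplane LSH: map a vector to its tuple of sign bits."""
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_bits)]
    def h(v):
        return tuple(1 if sum(a * b for a, b in zip(plane, v)) >= 0 else 0
                     for plane in planes)
    return h

rng = random.Random(0)
points = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(1000)]
query = [0.1] * 8

# Bucket all points by their hash; a query scans only its own bucket.
h = make_hash(8, 12, rng)
buckets = {}
for p in points:
    buckets.setdefault(h(p), []).append(p)

candidates = buckets.get(h(query), points)  # fall back to a full scan
approx = nearest_neighbor(candidates, query)
exact = nearest_neighbor(points, query)
```

The approximate answer can be worse than the exact one, but it is found by scanning only a small fraction of the data set; real LSH schemes use many hash tables to boost the probability of finding a near neighbor.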

Invariantization of listings by Jörg Flum

We consider a halting problem for nondeterministic Turing machines and show, via invariantization of listings, how its complexity relates to:

  • the existence of almost optimal algorithms for the set of propositional tautologies;
  • the existence of hard sequences for all algorithms deciding a given problem;
  • the existence of logics capturing polynomial time.

Duality and recognition by Mai Gehrke

The fact that one can associate a finite monoid with universal properties to each language recognised by an automaton is central to the solution of many practical and theoretical problems in automata theory. It is particularly useful, via the advanced theory initiated by Eilenberg and Reiterman, in separating various complexity classes and, in some cases, it leads to decidability of such classes.
It turns out that this theory may be seen as a special case of Stone duality for Boolean algebras extended to a duality between Boolean algebras with additional operations and Stone spaces equipped with Kripke style relations. This is a duality which also plays a fundamental role in other parts of the foundations of computer science, including in modal logic and in domain theory.
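To make the monoid attached to a regular language concrete, here is a small sketch (ours, for illustration): the transition monoid of a two-state DFA, computed by closing the letter transformations under composition. The DFA and all names are illustrative assumptions.

```python
from itertools import product

# DFA over {a, b} for "even number of a's": states 0 (even), 1 (odd).
# Each letter acts on the state set as a transformation, written as a
# tuple mapping state index -> next state.
letters = {
    'a': (1, 0),  # 'a' flips the parity
    'b': (0, 1),  # 'b' leaves the state unchanged
}

def compose(f, g):
    """Apply f first, then g (composition of state transformations)."""
    return tuple(g[f[s]] for s in range(len(f)))

# Close the letter transformations under composition: the result is the
# transition monoid, which recognises the language.
monoid = set(letters.values())
monoid.add(tuple(range(2)))  # identity = image of the empty word
changed = True
while changed:
    changed = False
    for f, g in product(list(monoid), repeat=2):
        fg = compose(f, g)
        if fg not in monoid:
            monoid.add(fg)
            changed = True

# For this language the monoid is Z/2Z: just the identity and the flip.
```

Membership of a word in the language depends only on the monoid element it maps to, which is exactly the algebraic recognition the talk builds on.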

In this talk I will give a general introduction to Stone duality and explain what this has to do with the connection between regular languages and monoids.

This is joint work with Serge Grigorieff and Jean-Éric Pin.

Some Variants of the Star Height Problem by Daniel Kirsten

In the talk, we present a new approach to the decidability of, and the first upper complexity bound for, Eggan's famous star height problem from 1963: is the star height (with respect to rational expressions built from union, concatenation, and iteration) of a given recognizable language effectively computable?
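For orientation, the star height of a single rational expression is the maximum nesting depth of its iteration operator and is trivial to compute; the hard problem is minimizing it over all expressions denoting the same language. A minimal sketch (our own encoding of expressions as nested tuples, purely illustrative):

```python
# Rational expressions as nested tuples:
#   ('sym', a)          a single letter
#   ('union', e1, e2)   union
#   ('cat', e1, e2)     concatenation
#   ('star', e)         iteration
def star_height(e):
    """Nesting depth of 'star' in the expression tree."""
    op = e[0]
    if op == 'sym':
        return 0
    if op in ('union', 'cat'):
        return max(star_height(e[1]), star_height(e[2]))
    if op == 'star':
        return 1 + star_height(e[1])
    raise ValueError(op)

# The expression (a*b)* has star height 2, but the language it denotes
# also has a height-1 expression -- witnessing that the minimum over
# all equivalent expressions is the nontrivial quantity.
expr = ('star', ('cat', ('star', ('sym', 'a')), ('sym', 'b')))
```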

For this, we introduce a new model of weighted automata, called distance desert automata, which are essentially a hierarchical, joint generalization of the distance automata of Hashiguchi and the desert automata of Bala and the author. Distance desert automata compute mappings from a free monoid into the positive integers, and we are interested in the limitedness problem, i.e., in deciding whether the range of the mapping of a given distance desert automaton is finite.
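To illustrate the semantics involved, here is a toy distance automaton (a special case of the model above; the automaton and all names are our own illustrative assumptions). Transitions carry costs in {0, 1}, the value of a word is the minimum summed cost over accepting runs, and limitedness asks whether these values are bounded over all accepted words. The value can be computed by a simple dynamic program over reachable states:

```python
INF = float('inf')

# Toy distance automaton over {a, b}.
# transitions[(state, letter)] = list of (next_state, cost)
transitions = {
    (0, 'a'): [(0, 1), (1, 0)],  # pay 1 to stay, or move to state 1 for free
    (0, 'b'): [(0, 0)],
    (1, 'b'): [(1, 0)],          # state 1 has no 'a'-transition
}
initial, accepting = 0, {0, 1}

def value(word):
    """Min over accepting runs of the summed costs (INF if rejected)."""
    costs = {initial: 0}
    for ch in word:
        nxt = {}
        for state, c in costs.items():
            for target, w in transitions.get((state, ch), []):
                if c + w < nxt.get(target, INF):
                    nxt[target] = c + w
        costs = nxt
    return min((c for s, c in costs.items() if s in accepting), default=INF)
```

Here value(a^n) = n - 1, so the range is infinite and this automaton is not limited; deciding limitedness in general is the hard problem the talk addresses.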

Finally, we show the first upper complexity bound for the relative inclusion star height problem.

Generic techniques to round SDP relaxations by Prasad Raghavendra

This talk will survey two general approaches to rounding solutions to SDP relaxations.

`Squish and Solve' Rounding:
This technique of rounding SDPs for constraint satisfaction problems generalizes and unifies a large body of SDP-based algorithms for CSPs.  More specifically, it yields a  a generic algorithm that for every CSP, achieves an approximation at least as good as the best known algorithm in literature.  The generic algorithm is guaranteed to achieve an approximation ratio equal to the integrality gap of an SDP relaxation known to be optimal under Unique Games Conjecture. This is based on joint work with David Steurer.

Rounding Using Correlations:
Despite the numerous applications of semidefinite programming, in all but very few cases, the algorithms rely on arguably the simplest SDP relaxation.  Hence the power of stronger semidefinite programming relaxations is not yet harnessed, leaving open the possibility that fundamental optimization problems like MaxCut and Vertex Cover could be approximated better using SDPs. The dearth of algorithms based on stronger SDP relaxations stems from the lack of general techniques to round these relaxations.

In this work, we present a technique to round SDP hierarchies using the underlying correlations between variables. To demonstrate it, we present two applications: one to the problem of MaxBisection and the other to general 2-CSPs on random graphs.  This is based on joint work with David Steurer, Ning Tan and Boaz Barak.

New Proofs in Graph Minors by Paul Wollan

The theory of Graph Minors developed by Robertson and Seymour has had wide-reaching consequences in graph theory and theoretical computer science. The main result of the theory, known as the graph minor structure theorem, approximately describes the structure of graphs which do not contain a fixed graph as a minor. The structure theorem is the central piece of many of the algorithmic applications of graph minor theory. Unfortunately, the statement of this structure theorem is quite complex, and the only proof until now requires a series of 16 technical papers.
In joint work with Ken-ichi Kawarabayashi and Robin Thomas, we have worked to make these techniques more accessible to the wider community. In this talk, I will discuss a new constructive proof which is much simpler than the original. Additionally, the new proof immediately gives a polynomial-time algorithm to find the decomposition and dramatically improves the constants obtained in the theorem.