May 28, 2010
Edge-Disjoint Paths via Raecke Decompositions
Matthew Andrews (Bell Labs)
Abstract. The Edge-Disjoint Paths (EDP) problem is one of the original NP-hard problems. We are given a set of source-destination pairs in a graph and we wish to connect as many of them as possible using edge-disjoint paths.
The Edge-Disjoint Paths with Congestion (EDPwC) problem is a relaxation in which we want to route approximately as many demands as in the optimum solution to EDP but we allow ourselves a small amount of congestion on each edge.
In this talk we shall give a brief history of these problems and describe a new algorithm for EDPwC based on the notion of a Raecke decomposition. This algorithm gives a polylog(n) approximation for the number of routed demands as long as we allow poly(log log n) congestion on each edge.
Play video
Apr 30, 2010
Subexponential Algorithms for Unique Games and Related Problems
David Steurer (Princeton University)
Abstract. We give a subexponential time approximation algorithm for the Unique Games problem: Given a Unique Games instance with alphabet size k such that a 1-epsilon^6 fraction of the constraints can be satisfied, our algorithm finds in time exp(k*n^epsilon) an assignment that satisfies a 1-epsilon fraction of the constraints.
We also obtain subexponential algorithms with similar approximation guarantees for Small-Set Expansion and Multi Cut. For Max Cut, Sparsest Cut and Vertex Cover, our techniques lead to subexponential algorithms with improved approximation guarantees on subclasses of instances.
Khot's Unique Games Conjecture (UGC) states that it is NP-hard to achieve approximation guarantees such as ours for Unique Games. While our result stops short of refuting the UGC, it does suggest that Unique Games is significantly easier than NP-hard problems such as Max 3-SAT and Label Cover, which are believed not to have subexponential algorithms achieving a non-trivial approximation ratio.
The main component in our algorithms is a new kind of graph decomposition that may have other applications: We show that by changing an epsilon fraction of its edges, any regular graph on n vertices can be broken into disjoint parts such that the stochastic adjacency matrix of each part has at most n^epsilon eigenvalues larger than 1-epsilon^6.
Joint work with Sanjeev Arora and Boaz Barak.
Apr 23, 2010
Fast Approximation of Multicommodity Flow Problems via Dynamic Graph Algorithms
Aleksander Madry (MIT)
Abstract. We present faster (1-eps)-approximation schemes for various versions of the multicommodity flow problem. In particular, if eps is moderately small and the size of every number used in the input instance is polynomially bounded, the running times of our algorithms match -- up to poly-logarithmic factors and some provably optimal terms -- the Omega(mn) flow-decomposition barrier for single-commodity flow.
Our approach is based on combining the existing Lagrangian-relaxation algorithms for these problems with ideas from dynamic graph algorithms.
Apr 16, 2010
Differentiation in metric spaces and the geometry of graphs
James Lee (University of Washington)
Abstract. A number of generalizations of classical differentiation theory have arisen in geometry and analysis in order to handle spaces that lack some of the special structure of Euclidean space (e.g. infinite dimensional Banach spaces, or non-abelian groups, or even discrete metric spaces).
I will discuss a rather elementary, yet powerful approach called "coarse differentiation" which originally arose in work of Eskin, Fisher, and Whyte in geometric group theory. Then I will present a simple recursive construction which takes a base graph G, and produces a much larger graph that contains copies of G "at every scale."
By instantiating the construction with G being, respectively, a 4-cycle, an expander graph, the hypercube, and a discrete version of the sphere, we show how to answer four somewhat unrelated questions that arose previously in metric embeddings.
Play video
Mar 26, 2010
Public-key Encryption Schemes with Auxiliary Input
Vinod Vaikuntanathan (IBM T.J. Watson)
Abstract. We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman and Learning with Errors problems.
Our technical contributions include:
- a novel extension of the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields, and
- a proof that the learning with errors assumption holds even in the presence of auxiliary information about the secrets.
Joint work with Yevgeniy Dodis, Shafi Goldwasser, Yael Kalai and Chris Peikert.
Play video
Below is the video for the talk on Thursday, March 25, 2010.
Play video (Dept Colloquium)
Mar 5, 2010
PTAS for planar Steiner forest
Mohammad Hossein Bateni (Princeton University)
Abstract. We describe a thorough complexity study of Steiner forest in bounded treewidth graphs, planar graphs, and bounded genus graphs. We give the first polynomial-time approximation scheme for the Steiner forest problem on planar graphs and, more generally, on graphs of bounded genus. The crux of the process is a clustering procedure called prize-collecting clustering that breaks down the input instance into separate subinstances which are easier to handle. We also give a polynomial-time approximation scheme for Steiner forest on graphs of bounded treewidth, a problem that is NP-hard even on graphs of treewidth 3, and show that Steiner forest can be solved in polynomial time for series-parallel graphs (graphs of treewidth at most two) by a combination of dynamic programming and minimum cut computations.
This is joint work with MohammadTaghi Hajiaghayi and Daniel Marx.
Play video
Feb 26, 2010
The Intersection of Two Halfspaces Has Maximum Threshold Degree
Alexander Sherstov (Microsoft Research)
Abstract. The threshold degree of a Boolean function f : {0,1}^n -> {-1,+1} is the least degree of a real polynomial p with f(x) = sgn p(x). We prove that the intersection of two halfspaces on {0,1}^n has threshold degree Omega(n), which matches the trivial upper bound and solves an open problem due to Klivans (2002). The best previous lower bound was Omega(log n/log log n). Our result shows that learning the intersection of two halfspaces on {0,1}^n (a central unresolved challenge in computational learning theory) admits only a trivial 2^{Theta(n)}-time algorithm based on sign-representation by polynomials, in contrast to the advances achieved in PAC learning DNF formulas and read-once Boolean formulas.
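To make the definition concrete, here is a small Python check of sign-representation (an illustrative example of the definition, not taken from the talk): the AND of two bits has threshold degree 1, while XOR already requires degree 2.

```python
import itertools

def sign_represents(poly, f, n):
    """Check that sgn(poly(x)) = f(x) for every x in {0,1}^n."""
    for x in itertools.product((0, 1), repeat=n):
        if (1 if poly(x) > 0 else -1) != f(x):
            return False
    return True

# AND of two bits (as a {-1,+1}-valued function) has threshold degree 1:
AND = lambda x: 1 if x[0] == 1 and x[1] == 1 else -1
assert sign_represents(lambda x: x[0] + x[1] - 1.5, AND, 2)

# XOR needs degree 2: the quadratic (x1 - x2)^2 - 1/2 sign-represents it,
# while no linear polynomial can (the classic linear-inseparability fact).
XOR = lambda x: 1 if x[0] != x[1] else -1
assert sign_represents(lambda x: (x[0] - x[1]) ** 2 - 0.5, XOR, 2)
```

The result above says that for the intersection of two halfspaces, no polynomial of degree o(n) can pass this check.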
Play video
Feb 19, 2010
How to boost the accuracy without changing the distribution
Vitaly Feldman (IBM Research at Almaden)
Abstract. We consider the problem of boosting the accuracy of weak learning algorithms (Schapire 1990).
Known algorithms for this problem execute the weak learner on the same target function but over different distributions on the domain. Application of such boosting algorithms requires a distribution-independent weak learning algorithm. We demonstrate that it is also possible to boost the accuracy of a learning algorithm by executing it on different target functions but without changing the distribution over the domain.
We then use our boosting algorithm to characterize the complexity of strong learning in the Statistical Query framework of Kearns (1993) and of learning in the agnostic model. When applied to the weak agnostic parity learning algorithm of Goldreich and Levin (1989), our boosting algorithm yields a simple PAC learning algorithm for DNF and an agnostic learning algorithm for decision trees over the uniform distribution using membership queries. These results substantially simplify Jackson's famous DNF learning algorithm (1994) and the recent result of Gopalan et al. (2008).
The talk is based on:
A Complete Characterization of Statistical Query Learning with Applications to Evolvability (FOCS 2009) and Distribution-Specific Agnostic Boosting (ICS 2010)
Play video
Feb 12, 2010
Simpler, Better Balanced Search Trees
Siddhartha Sen (Princeton University)
Abstract. Balanced search trees are ubiquitous in computer science, appearing in database systems, the Linux task scheduler, and many other applications. In this talk, I will present new balanced search trees that are simpler and more efficient than existing trees, showing that the design space of this classical data structure is far from exhausted.
We begin with the rank-balanced tree, a simple relaxation of AVL trees that takes $O(1)$ amortized time to rebalance after insertions and deletions. Rank-balanced trees perform fewer rotations than red-black trees and achieve better height bounds. Using a novel analysis that relies on an exponential potential function, we show that rebalancing modifies nodes exponentially infrequently in their heights.
Building on these techniques, we show that balanced search trees remain efficient even if deletion is done without any rebalancing. These results justify the practice of many B-tree-based database systems. In the case of binary trees, the underlying data structure must be modified to obtain such a result, leading to the relaxed AVL (ravl) tree. Ravl trees have height logarithmic in the number of insertions, and rebalancing modifies nodes exponentially infrequently in their heights. However, this seems to require $\Omega(\log\log n)$ bits of balance information at each of the $n$ nodes. Both rank-balanced trees and ravl trees show great promise in practice and are currently being taught to undergraduates.
Play video
Jan 29, 2010
Tight bound for Gap Hamming Distance
Oded Regev (Tel Aviv University)
Abstract. We consider the Gap Hamming Distance problem in communication complexity. Here, Alice receives an $n$-bit string $x$, and Bob receives an $n$-bit string $y$. They are promised that the Hamming distance between $x$ and $y$ is either at least $n/2+\sqrt{n}$ or at most $n/2-\sqrt{n}$, and their goal is to decide which is the case.
We show a tight lower bound of $\Omega(n)$ on the randomized communication complexity of the problem. The bound is based on a new geometric statement regarding correlations in Gaussian space, related to a result of C. Borell from 1985.
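As a concrete illustration of the promise problem (an illustrative sketch, not part of the result), a party who sees both strings can decide an instance trivially; the difficulty lies entirely in doing so with little communication:

```python
def ghd(x, y, n):
    """Decide a promised Gap Hamming Distance instance:
    return +1 if dist(x, y) >= n/2 + sqrt(n), -1 if dist(x, y) <= n/2 - sqrt(n)."""
    d = sum(a != b for a, b in zip(x, y))  # Hamming distance
    gap = n ** 0.5
    if d >= n / 2 + gap:
        return +1
    if d <= n / 2 - gap:
        return -1
    raise ValueError("promise violated")

n = 100  # sqrt(n) = 10, so the promise requires dist >= 60 or dist <= 40
x = [0] * n
far = [1] * 60 + [0] * 40   # Hamming distance 60 = n/2 + sqrt(n)
near = [1] * 40 + [0] * 60  # Hamming distance 40 = n/2 - sqrt(n)
assert ghd(x, far, n) == +1
assert ghd(x, near, n) == -1
```

The lower bound says that any randomized protocol in which Alice and Bob each see only their own string must exchange $\Omega(n)$ bits to decide this, matching the trivial protocol in which Alice sends all of $x$.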
Based on joint work with Amit Chakrabarti.
Play video
Jan 25, 2010 (Monday)
An Improved Competitive Algorithm for Reordering Buffer Management
Noa Elgrabli (Technion)
Abstract. In the reordering buffer management problem, a stream of colored items arrives at a service station equipped with a buffer that has a limited storage capacity of $k$ items. The buffer is used to permute the input stream (in a limited way) in order to minimize the context-switching cost, which is incurred whenever there is a color change between consecutive items served. In our work we design and analyze an on-line reordering buffer management algorithm with an improved $O\left(\frac{\log k}{\log\log k}\right)$ competitive ratio for non-uniform costs. This improves on the best previous result (even for uniform costs) of Englert and Westermann (ICALP 2005), an $O(\log k)$ competitive ratio, which was also the best (off-line) polynomial-time approximation guarantee for this problem. Our analysis is based on an intricate dual fitting argument using a linear programming relaxation for the problem.
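To make the cost model concrete, here is a toy Python simulation of a reordering buffer with a naive greedy rule (serve the most frequent buffered color). This is only an illustration of the model with uniform costs, not the $O\left(\frac{\log k}{\log\log k}\right)$-competitive algorithm from the talk.

```python
from collections import Counter

def greedy_buffer_cost(stream, k):
    """Simulate a capacity-k reordering buffer with a simple greedy rule:
    keep serving the current color while it is still buffered, otherwise
    switch to the most frequent buffered color. Returns the number of
    color switches (uniform context-switching cost)."""
    buf = Counter()
    it = iter(stream)
    for item in it:          # fill the buffer from the stream
        buf[item] += 1
        if sum(buf.values()) == k:
            break
    current, switches = None, 0
    while buf:
        if current is None or buf[current] == 0:
            buf = +buf       # drop colors whose count reached zero
            if not buf:
                break
            current = buf.most_common(1)[0][0]  # switch to most frequent color
            switches += 1
        buf[current] -= 1            # serve one item of the current color
        nxt = next(it, None)         # refill from the stream if possible
        if nxt is not None:
            buf[nxt] += 1

    return switches

# The buffer batches equal colors: "ababab" is served with 2 switches
# (one block of a's, then one block of b's) instead of 6 without a buffer.
assert greedy_buffer_cost("ababab", k=3) == 2
```

Even this naive rule shows how the buffer reduces color changes; the talk's algorithm achieves a provable competitive ratio against the optimal offline permutation, including for non-uniform switching costs.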
Joint work with Yuval Rabani.
Jan 22, 2010
A nonlinear approach to dimension reduction
Robert Krauthgamer (Weizmann Institute)
Abstract. The celebrated Johnson-Lindenstrauss lemma says that every n points in Euclidean space can be represented with O(log n) dimensions with only a minor distortion of pairwise distances. It has been conjectured that a much-improved dimension reduction representation is achievable for many interesting data sets, by bounding the target dimension in terms of the intrinsic dimensions of the data, e.g. by replacing the log(n) term with the doubling dimension. This question appears to be quite challenging, requiring new (nonlinear) embedding techniques.
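For reference, the classical Johnson-Lindenstrauss construction is simply a random linear map. The sketch below (an illustrative demo using a scaled Gaussian projection, with parameters chosen for the example) checks empirically that pairwise distances are roughly preserved:

```python
import math
import random

def jl_embed(points, k, seed=0):
    """Project points in R^d down to R^k using a random Gaussian matrix
    scaled by 1/sqrt(k) -- the classical Johnson-Lindenstrauss map."""
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]
    return [[sum(g[i] * p[i] for i in range(d)) / math.sqrt(k) for g in G]
            for p in points]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(5)]
emb = jl_embed(pts, k=200)  # 500 dimensions -> 200 dimensions
# With k = 200 target dimensions the distortion is typically small;
# a factor-2 bound on every pairwise distance holds with high probability.
for i in range(5):
    for j in range(i + 1, 5):
        assert 0.5 < dist(emb[i], emb[j]) / dist(pts[i], pts[j]) < 2.0
```

Note that this map is oblivious to the data; the conjecture discussed in the talk asks for embeddings whose target dimension adapts to the intrinsic (e.g. doubling) dimension of the point set, which is why linear maps no longer suffice.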
We make progress towards resolving this question by presenting two dimension reduction theorems with similar flavor to the conjecture. For some intended applications, these results can serve as an alternative to the conjectured embedding.
Joint work with Lee-Ad Gottlieb.
Play video
Jan 15, 2010
Approximating the Asymmetric Traveling Salesman Problem
Michel Goemans (MIT)
Abstract. The traveling salesman problem (TSP) --- the problem of finding the shortest tour through a set of cities --- is the prototypical combinatorial optimization problem. It comes in two flavors, the symmetric (STSP) and asymmetric (ATSP) versions; the latter corresponds to the setting in which the cost of traveling from A to B might differ from the cost from B to A. Whether one can efficiently find solutions in the asymmetric case that are within a constant factor of optimal has proved to be elusive despite considerable efforts, while the same question in the symmetric case has been settled for decades.
In this talk, I will describe a randomized algorithm for ATSP which delivers a solution within a factor $O(\log n/ \log\log n)$ of the optimum. This is the first approximation algorithm that breaks the logarithmic glass ceiling of approximability. The algorithm is somewhat reminiscent of Christofides' algorithm for the symmetric case. It first solves the classical Held-Karp linear programming relaxation of the problem, then generates a spanning tree T while disregarding the orientations of the arcs, and finally augments it into a directed Eulerian graph at minimum cost by solving a minimum cost flow problem. The cost of the final step depends on the thinness of the tree T. We show how to construct a suitable probability distribution on spanning trees which are sufficiently thin with high probability.
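For reference, the Held-Karp relaxation solved in the first step can be written as follows (standard textbook formulation; the notation is mine, with $\delta^+(U)$ and $\delta^-(U)$ denoting the arcs leaving and entering $U$, and $c_a$ the cost of arc $a$):

```latex
\begin{aligned}
\min\ & \sum_{a \in A} c_a x_a \\
\text{s.t.}\ & x\bigl(\delta^+(U)\bigr) \ge 1 && \forall\, \emptyset \neq U \subsetneq V, \\
& x\bigl(\delta^+(v)\bigr) = x\bigl(\delta^-(v)\bigr) = 1 && \forall\, v \in V, \\
& x_a \ge 0 && \forall\, a \in A.
\end{aligned}
```

The fractional solution $x$ guides the random choice of the spanning tree $T$, and the degree and cut constraints are what make the final min-cost flow augmentation comparable to the LP optimum.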
During the talk, our tour will bring us to visit polyhedral arguments, negative correlation, random spanning trees, maximum entropy distributions, approximate minimum cuts, Kirchhoff's matrix tree theorem, and several other concepts.
Play video