May 24, 2013

# Theory Lunch – Spring 2012

April 27, 2012

Title: Multicommodity Flows and Cuts in Polymatroidal Networks
Chandra Chekuri, UIUC

[Complete Video]

Abstract:

The well-known maxflow-mincut theorem for single-commodity flows in directed networks is of fundamental importance in combinatorial optimization. Flow-cut equivalence does not hold in the multicommodity case. A substantial amount of research in the algorithms community has focused on understanding the gap between multicommodity flows and associated cuts. Poly-logarithmic gap results have been established in various settings via tools from network decomposition and metric embeddings.

In this talk we describe work that obtains flow-cut gap results in *polymatroidal* networks. In these networks there are submodular capacity constraints on the edges incident to a node. Such networks were introduced by Lawler & Martel and Hassin in the single-commodity setting and are closely related to the submodular flow model of Edmonds and Giles. The well-known maxflow-mincut theorem for a single commodity holds in polymatroidal networks; however, the multicommodity case has not been previously explored. Our work is primarily motivated by applications to information flow in wireless networks, although the technical results we obtain are of independent interest, as they generalize known results on standard networks, including node-capacitated undirected networks. The results highlight the utility of line embeddings suggested by Matousek and Rabinovich. We use them to round the dual of the flow relaxation, rewritten with a convex objective function obtained from the Lovasz extension of a submodular function.
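
As a point of reference for the last sentence (and not a reconstruction of the rounding procedure in the paper), here is a minimal sketch of how the Lovasz extension of a submodular set function is evaluated; the function names and the toy example are illustrative only.

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovasz extension of a set function f (with f(empty set) = 0) at x.

    Standard formula: sort the coordinates of x in decreasing order and sum the
    marginal values of f along the resulting chain of prefix sets.
    """
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)              # coordinate indices in decreasing order of x
    value, prefix, prev = 0.0, set(), 0.0
    for i in order:
        prefix.add(int(i))
        cur = f(frozenset(prefix))
        value += x[i] * (cur - prev)    # weight the marginal gain of adding i by x_i
        prev = cur
    return value

# Example: f(S) = min(|S|, 2) is submodular, so its Lovasz extension is convex.
f = lambda S: min(len(S), 2)
print(lovasz_extension(f, [0.9, 0.5, 0.1]))   # 0.9*1 + 0.5*1 + 0.1*0 = 1.4
```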

Joint work with Sreeram Kannan, Adnan Raja and Pramod Viswanath.
Paper available at http://arxiv.org/abs/1110.6832

April 20, 2012

Nearly Complete Graphs Decomposable into Large Induced Matchings and their Applications

Ankur Moitra, IAS

Dense graphs consisting of large pairwise edge disjoint induced matchings have found several applications in combinatorics, complexity theory and information theory. Call a graph $G=(V,E)$ an $(r,t)$-Ruzsa-Szemeredi graph ($(r,t)$-RS graph, for short) if its set of edges consists of $t$ pairwise disjoint induced matchings, each of size $r$. Graphs of this type are useful when both $r$ and $t$ are relatively large as a function of the number of vertices $N$.
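
To make the definition concrete (an illustration only, not something from the talk), here is a small sketch that checks whether a proposed decomposition witnesses an $(r,t)$-RS graph; the function name and the toy example are mine.

```python
from itertools import combinations

def is_rs_decomposition(edge_list, matchings, r):
    """Check that `matchings` (t edge lists) witnesses an (r, t)-RS graph on `edge_list`."""
    edges = {frozenset(e) for e in edge_list}
    seen = set()
    for m in matchings:
        m = {frozenset(e) for e in m}
        if len(m) != r:
            return False                    # every matching must have size exactly r
        verts = [v for e in m for v in e]
        if len(verts) != len(set(verts)):
            return False                    # edges must be vertex-disjoint (a matching)
        if seen & m:
            return False                    # matchings must be pairwise edge-disjoint
        inside = {frozenset(p) for p in combinations(set(verts), 2)} & edges
        if inside != m:
            return False                    # induced: no other graph edges inside the matched vertices
        seen |= m
    return seen == edges                    # together the matchings must cover all edges

# Tiny example: the 6-cycle is a (2, 3)-RS graph.
c6 = [(i, (i + 1) % 6) for i in range(6)]
matchings = [[(0, 1), (3, 4)], [(1, 2), (4, 5)], [(2, 3), (5, 0)]]
print(is_rs_decomposition(c6, matchings, r=2))   # True
```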

The first surprising construction was given by Ruzsa and Szemeredi, who applied a result of Behrend about the existence of dense subsets of $\{1,2,\ldots,\Theta(N)\}$ containing no 3-term arithmetic progressions. Since then, a number of alternative constructions have been given, relying on tools from additive number theory, coding theory and low degree representations of Boolean functions. Additionally, proofs of non-existence (in some ranges of parameters) rely on the regularity lemma. Interestingly, none of these constructions addresses the question of whether or not an $(r,t)$-RS graph can simultaneously have positive density and yet be an edge disjoint union of polynomially large induced matchings. And indeed, it has been independently conjectured (in quite different contexts) that such graphs do not exist.

We describe two constructions of (very) dense graphs which are edge disjoint unions of large induced matchings. The first construction exhibits graphs on $N$ vertices with ${N \choose 2}-o(N^2)$ edges, which can be decomposed into pairwise disjoint induced matchings, each of (nearly-linear) size $N^{1-o(1)}$. The second construction provides a covering of all edges of the complete graph $K_N$ by two graphs, each being the edge disjoint union of at most $N^{2-\delta}$ induced matchings, where $\delta > 0.076$. Additionally, we give applications of these results to communicating over a shared multi-channel, linearity testing and a candidate rounding algorithm for the directed Steiner tree problem.

This is based on joint work with Noga Alon and Benny Sudakov.

April 13, 2012

Hypercontractivity, Sum-of-Squares Proofs, and their Applications

[Complete Video]

David Steurer, Microsoft Research

We study the computational complexity of approximating the 2-to-q norm of
linear operators for q > 2, as well as connections of this question to
issues arising in quantum information theory and in the context of Khot's
Unique Games Conjecture (UGC). We show the following:

- A constant level of the Sum-of-Squares / Lasserre semidefinite
programming hierarchy certifies the unsatisfiability of Unique Games
instances based on the noisy cube or the short code. The previous best
upper bound on the level of these instances was exp((log n)^{Omega(1)})
(for the short code). This result also separates the Sum-of-Squares
hierarchy from weaker hierarchies that were known to require omega(1)
levels for these instances. A key ingredient of the result is that the
Sum-of-Squares hierarchy can certify bounds on the 2-to-4 norm of the
projector into low-degree polynomials over the Boolean cube.

- We characterize small-set expansion of graphs in terms of 2-to-q norms
of projectors into the span of their top eigenvectors. As a corollary,
achieving certain "good approximations" to the 2-to-q norm is at least
as hard as refuting the Small-Set Expansion Hypothesis, a close relative
of the UGC. On the one hand, we show that such good approximations can
be obtained in time exp(n^{2/q}), generalizing the known subexponential-time
algorithm for Small-Set Expansion. On the other hand, we show that good
approximations for the 2-to-4 norm require quasipolynomial time assuming
that 3-SAT requires exponential time. The latter result builds on a
connection to the quantum separability problem.
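
For readers unfamiliar with the quantity: ||A||_{2->q} is the maximum of ||Ax||_q over unit vectors x (in the 2-norm). The talk is about *certifying upper bounds* on it, which is the hard direction; as a toy point of reference only (not a method from the talk), the sketch below produces heuristic lower bounds by gradient ascent on the sphere.

```python
import numpy as np

def two_to_q_lower_bound(A, q=4, iters=2000, step=0.05, seed=0):
    """Heuristic lower bound on ||A||_{2->q} = max_{||x||_2 = 1} ||Ax||_q.

    Gradient ascent on ||Ax||_q^q over the unit sphere; the objective is
    non-convex for q > 2, so this only certifies a lower bound.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        grad = q * A.T @ (np.sign(y) * np.abs(y) ** (q - 1))   # gradient of ||Ax||_q^q
        x = x + step * grad
        x /= np.linalg.norm(x)                                  # project back to the sphere
    return np.linalg.norm(A @ x, ord=q)

# Sanity check: for a single row a, ||a||_{2->q} = ||a||_2 for every q.
a = np.array([[3.0, 4.0]])
print(two_to_q_lower_bound(a, q=4))   # ~5.0
```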

March 30, 2012

Title: Approximating Graphic TSP by Matchings
Ola Svensson

[Complete Video]

Abstract:

We will discuss a framework for approximating the metric TSP based on
a novel use of matchings. Traditionally, matchings have been used to
add edges in order to make a given graph Eulerian, whereas our
approach also allows for the removal of certain edges, leading to a
decreased cost.
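
For context, the "traditional" use of matchings referred to above is the classical Christofides construction: take a minimum spanning tree, add a minimum-weight perfect matching on its odd-degree vertices to make the graph Eulerian, and shortcut an Euler tour. The sketch below (assuming the networkx library, with a brute-force matching that is only suitable for tiny inputs) illustrates that classical route; it is not the removal-based framework of the talk.

```python
import itertools
import networkx as nx

def christofides_tour(G):
    """Classical Christofides 1.5-approximation for metric TSP on a complete weighted nx.Graph."""
    T = nx.minimum_spanning_tree(G, weight="weight")
    odd = [v for v in T if T.degree(v) % 2 == 1]         # odd-degree vertices of the MST

    def min_matching(vs):                                 # brute-force min-weight perfect matching
        if not vs:
            return 0.0, []
        v, rest = vs[0], vs[1:]
        best = (float("inf"), [])
        for i, u in enumerate(rest):
            w, m = min_matching(rest[:i] + rest[i + 1:])
            w += G[v][u]["weight"]
            if w < best[0]:
                best = (w, m + [(v, u)])
        return best

    _, matching = min_matching(odd)
    H = nx.MultiGraph(T)
    H.add_edges_from(matching)                            # every degree is now even: Eulerian
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):                   # shortcut repeated vertices
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour

# Tiny example: four points at the corners of a unit square.
pts = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
G = nx.Graph()
for u, v in itertools.combinations(pts, 2):
    d = ((pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2) ** 0.5
    G.add_edge(u, v, weight=d)
print(christofides_tour(G))   # e.g. [0, 1, 3, 2]
```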

We then give an overview of the exciting and rapid developments for
TSP on graphic metrics following this approach: our 1.461-approximation
algorithm, the better analysis by Mucha'11 yielding an approximation
guarantee of 1.44, and the recent development by Sebo & Vygen'12, who
gave a 1.4-approximation algorithm.

Finally, we point out some interesting open problems where our
techniques currently fall short of generalizing to more general
metrics.

This is based on joint work with Tobias Moemke.

March 16, 2012

Jelani Nelson

Title: Sparser Johnson-Lindenstrauss Transforms [Complete Video]

Abstract:

The Johnson-Lindenstrauss (JL) lemma (1984) states that any n points in d-dimensional Euclidean space can be embedded into k = O((log n)/eps^2) dimensions so that all pairwise distances are preserved up to 1+/-eps. Furthermore, this embedding can be achieved via a linear mapping. The JL lemma is a useful tool for speeding up solutions to several high-dimensional problems: closest pair, nearest neighbor, diameter, minimum spanning tree, etc. It also speeds up some clustering and string processing algorithms, reduces the amount of storage required to store a dataset, and can be used to reduce memory required for numerical linear algebra problems such as linear regression and low rank approximation.

The original proofs of the JL lemma let the linear mapping be specified by a random dense k x d matrix (e.g. i.i.d. Gaussian entries). Thus, performing an embedding requires dense matrix-vector multiplication. We give a construction of linear mappings for JL in which only a subconstant fraction of the embedding matrix is non-zero, regardless of how eps and n are related, thus always speeding up the embedding time. Previous constructions only achieved sparse embedding matrices for 1/eps >> log n.
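
To give a flavor of such constructions (this is a simplified sketch in the spirit of the result, not the precise construction or parameter setting from the paper), each column of the k x d embedding matrix below has only s nonzero entries, each equal to +-1/sqrt(s), placed in distinct random rows.

```python
import numpy as np

def sparse_jl_matrix(d, k, s, seed=0):
    """A k x d matrix with s nonzeros of value +-1/sqrt(s) per column (illustrative only)."""
    rng = np.random.default_rng(seed)
    S = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)       # s distinct target rows
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

# Quick check: pairwise distances of random points are roughly preserved.
d, k, s = 1000, 200, 8
X = np.random.default_rng(1).standard_normal((20, d))
Y = X @ sparse_jl_matrix(d, k, s).T
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))   # similar magnitudes
```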

This is joint work with Daniel Kane (Stanford).

March 2, 2012

Brendan Lucier, Microsoft Research Cambridge

The Limits of Black-Box Mechanism Design
[Complete Video]

A single-parameter mechanism design problem is an optimization problem with a game-theoretic twist: the input values are held by rational agents, who may misrepresent the input for their own gain. To combat this manipulation one might ask for an algorithm that is incentive compatible, meaning that it is in the best interest of each agent to report its value truthfully. In general, however, the impact of the truthfulness requirement on the approximation factors achievable by polynomial-time algorithms for single-parameter problems is unknown.

We consider the problem of converting an arbitrary approximation algorithm for a single-parameter optimization problem into a computationally efficient truthful mechanism. We ask for reductions that are "black-box", meaning that they require only oracle access to the given algorithm and in particular do not require explicit knowledge of the problem constraints.

We show that such a black-box transformation is possible for social welfare problems in the Bayesian setting, where the inputs are drawn from commonly-known distributions. In the non-Bayesian setting, however, we show that no such general black-box reduction exists, even for single-parameter problems. Specifically, a black-box reduction for the social welfare objective is not possible if the resulting mechanism is required to be truthful in expectation and to preserve the worst-case approximation ratio of the algorithm to within a subpolynomial factor. Further, for other objectives such as makespan, no black-box reduction is possible even if we only require Bayesian truthfulness and an average-case performance guarantee.

Joint work with Shuchi Chawla and Nicole Immorlica.

February 24, 2012

Ravi Kannan, Microsoft Research

Title: k-MEANS REVISITED
[Complete Video]

In many applications, fairly fast clustering algorithms seem to yield the desired solution. Theoretically, two types of assumptions lead to provably fast algorithms for clustering: (i) stochastic (mixture) models of data and (ii) uniqueness of optimal solution even under perturbations of data. We show that under an assumption weaker than either of these, Lloyd's (k-means) algorithm converges to the correct solution. We apply the result to the planted clique problem.
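
For reference, the Lloyd's (k-means) iteration mentioned above is the standard alternation between assigning each point to its nearest center and recomputing each center as the mean of its cluster; the sketch below shows only that iteration, not the paper's assumption, analysis, or the planted clique application.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's iteration on an n x d data matrix X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]     # random initial centers
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                             # assign to nearest center
        new_centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):                  # converged
            break
        centers = new_centers
    return centers, labels

# Two well-separated Gaussian blobs are recovered easily.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
centers, labels = lloyd_kmeans(X, k=2)
print(np.round(centers, 1))   # roughly (0, 0) and (8, 8)
```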

Joint work with Amit Kumar.

February 17, 2012

Andrew Drucker (MIT)

Title: On the Hardness of Compressing an AND of SAT Instances
[Complete Video]

Abstract:

Given an instance of a decision problem that is too hard to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent one.

As one example, suppose we're given a collection of m^100 Boolean formulas {F_i} on disjoint variable-sets, each formula of length m; we'd like to know if at least one is satisfiable. Can we efficiently construct a new formula G of length, say, m^3, such that G is satisfiability-equivalent to the OR of the F_i's? This seems unlikely, but doesn't appear to contradict the hardness of SAT.

Bodlaender et al. (ICALP 2008) showed that the intractability of this "OR-compression" task would have important consequences in the study of "fixed-parameter-tractable" problems. They showed something similar for the corresponding "AND-compression" task, with a different set of consequences in this case. Motivated by this, Fortnow and Santhanam (STOC 2008) showed that efficient OR-compression is not possible unless NP is in coNP/poly, an unlikely consequence. But the case of AND-compression remained mysterious.

We show that efficient AND-compression would also imply that NP is in coNP/poly. To prove this result (and some extensions), we interpret any compression scheme as a communication channel and exploit its bottleneck: we give a new method to "disguise" information being fed into a compressive mapping.

February 10, 2012

Afonso Bandeira, Princeton Department of Mathematics

Title: A Cheeger Inequality for $SO(3)$ Synchronization
[Complete Video]

Joint work with Amit Singer (Princeton) and Daniel Spielman (Yale).

Abstract: The $SO(3)$ synchronization problem consists of estimating a set of unknown rotations $O_i$ from noisy measurements of a subset of the pairwise relative rotations $O_iO_j^{-1}$. We show an $SO(3)$ Cheeger inequality that relates a measure of how well the $SO(3)$ synchronization problem can be solved to the spectrum of an operator, the graph Connection Laplacian. We also show how this inequality provides a worst-case performance guarantee for a spectral method that we propose for solving $SO(3)$ synchronization.
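
To give a sense of the objects involved (a generic sketch of one standard spectral approach, which may differ in details from the method analyzed in the talk), the graph Connection Laplacian is built from the measured relative rotations, its three bottom eigenvectors are taken, and each 3x3 block is rounded back to a rotation; the solution is recovered only up to one global rotation.

```python
import numpy as np

def spectral_so3_sync(n, edges):
    """Spectral SO(3) synchronization sketch.

    edges: dict mapping (i, j) with i < j to a 3x3 measurement of O_i O_j^{-1}.
    """
    L = np.zeros((3 * n, 3 * n))
    for (i, j), R in edges.items():
        L[3*i:3*i+3, 3*i:3*i+3] += np.eye(3)       # degree blocks on the diagonal
        L[3*j:3*j+3, 3*j:3*j+3] += np.eye(3)
        L[3*i:3*i+3, 3*j:3*j+3] -= R               # off-diagonal blocks: -R_ij
        L[3*j:3*j+3, 3*i:3*i+3] -= R.T             # and -R_ji = -R_ij^T
    _, vecs = np.linalg.eigh(L)
    V = vecs[:, :3]                                 # eigenvectors of the 3 smallest eigenvalues
    if np.linalg.det(V[:3, :]) < 0:                 # fix the global reflection ambiguity
        V[:, 0] *= -1
    rotations = []
    for i in range(n):
        U, _, Wt = np.linalg.svd(V[3*i:3*i+3, :])   # round the i-th block to the nearest rotation
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Wt))])
        rotations.append(U @ D @ Wt)
    return rotations

# Noiseless sanity check on a triangle: relative rotations are recovered exactly.
def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q

rng = np.random.default_rng(0)
O = [random_rotation(rng) for _ in range(3)]
edges = {(i, j): O[i] @ O[j].T for i in range(3) for j in range(i + 1, 3)}
est = spectral_so3_sync(3, edges)
print(np.allclose(est[0] @ est[1].T, O[0] @ O[1].T, atol=1e-6))   # True
```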