This is a tentative program and subject to change.
Talks are in the Friend Center auditorium (Room 101); lunch is in the Friend Center convocation room (Room 113). Breakout rooms Friend 004 and 006 are available.
Tue, Aug 25: Boolean complexity
Organizers: Scott Aaronson, Alexander Razborov
11:20 – 11:40 Break
11:40 – 12:30 Benjamin Rossman, The monotone complexity of k-clique on random graphs (slides)
12:30 – 14:30 Lunch
15:20 – 17:00 Break
Wed, Aug 26: Arithmetic computation
Organizers: Ran Raz, Amir Shpilka
11:20 – 11:40 Break
12:30 – 14:30 Lunch
16:30 – 17:00 Break
18:30 – 21:00 Workshop Banquet, Palmer House
Thu, Aug 27: Proof complexity
Organizers: Toni Pitassi, Pavel Pudlak
12:30 – 14:00 Lunch
14:45 – 15:00 Break
15:00 – 16:15 Short talks (25 minutes each)
Neil Thapen, TBA
16:15 – 16:30 Break
16:30 – 17:45 Short talks (25 minutes each)
17:45 – 20:30 Dinner
20:30 – 21:30 Panel Discussion, “Will proof complexity help to solve P vs. NP?”
Sam Buss (moderator)
Fri, Aug 28: Computational learning theory
Organizers: Avrim Blum, Rocco Servedio
09:20 – 09:30 Opening remarks
11:20 – 11:40 Break
12:30 Lunch break
16:20 – 16:40 Break
Sat, Aug 29: Pseudorandomness & Derandomization
Organizers: Valentine Kabanets, Luca Trevisan
11:00 – 11:15 Break
12:00 – 14:00 Lunch
14:45 – 15:15 Break
End of program
Title: On P vs. NP, Geometric Complexity Theory, Explicit Proofs and the Complexity Barrier
This talk will give an informal overview of the complexity barrier and its role in geometric complexity theory (GCT), an approach to P vs. NP and related problems. The relevant paper (with the same title as the talk) is available on the web at the speaker’s personal home page.
Title: Computation and Optimization in Machine Learning
Abstract: Many problems in statistical machine learning involve non-convex objective functions that are difficult to optimize. A frequently employed approach is to solve a convex approximation (or relaxation) of the original problem, which raises the question of the quality of such relaxation methods: is the solution of the approximation close to the solution of the original problem? A different approach is to use traditional numerical optimization procedures that directly find a local minimum of the non-convex problem. Again, an important question is the quality of the local minimum found.
I will discuss some recent results for both approaches, as well as some future directions. In the first part, I will discuss convex relaxation methods, with emphasis on the effectiveness of convex relaxation. The second part discusses local optimization methods; again, the emphasis is on the effectiveness of such procedures.
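As a purely illustrative instance of the relaxation strategy the abstract describes (not an example from the talk), the sketch below takes a non-convex sparse-regression objective, replaces the L0 sparsity penalty with its convex L1 relaxation, and solves the relaxed problem by proximal gradient descent (ISTA). The data, penalty weight, and recovery threshold are all arbitrary choices made for the demo:

```python
import numpy as np

# Sparse regression: the "true" objective penalizes the number of nonzeros
# (L0), which is non-convex and hard to optimize. The standard convex
# relaxation swaps L0 for L1, giving the lasso, solvable by ISTA.
rng = np.random.default_rng(0)
n, d, k = 50, 20, 3
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:k] = [3.0, -2.0, 1.5]                 # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.1                                     # L1 penalty weight (arbitrary)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L for the smooth part
x = np.zeros(d)
for _ in range(2000):
    grad = A.T @ (A @ x - b)                  # gradient of 0.5*||Ax - b||^2
    z = x - step * grad
    # proximal step for lam*||x||_1: soft thresholding
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

support = np.where(np.abs(x) > 0.1)[0]
print(support)
```

In this well-conditioned setting the relaxed (convex) problem recovers the support of the non-convex problem's solution, which is exactly the kind of "quality of the relaxation" question the abstract raises; in harder regimes the two can disagree.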
Title: Learning and Smoothed Analysis
We give a new model of learning motivated by smoothed analysis (Spielman and Teng, 2001). In this model, we analyze two new algorithms, for PAC-learning DNFs and for agnostically learning decision trees, from random examples drawn from constant-bounded product distributions. These two problems had previously been solved using membership queries (Jackson, 1995; Gopalan et al., 2005). Our analysis demonstrates that the “heavy” Fourier coefficients of a DNF alone suffice to recover the DNF. We also show that a structural property of the Fourier spectrum of any Boolean function holds over “typical” product distributions.
Joint work with Alex Samorodnitsky and Shang-Hua Teng.
Title: Learning, Polynomials, and Circuit Lower Bounds.
In 1989, Linial, Mansour, and Nisan pioneered the “Fourier method” in computational learning theory by reducing the problem of uniform-distribution learning to the problem of approximating concept classes by real polynomials. Additionally, they gave applications to proving circuit lower bounds for constant-depth circuits. In this two-part talk, I will continue these themes and describe 1) how real approximation theory is still a fundamental problem in learning theory (in particular, by examining the complexity of empirical risk minimization over R^n) and 2) why, in a precise sense, learning algorithms actually imply circuit lower bounds.
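A toy sketch of the uniform-distribution Fourier method in the Linial–Mansour–Nisan spirit: estimate every Fourier coefficient of a Boolean function up to some degree t from uniform random examples, then predict with the sign of the truncated expansion. The target function here (a 3-bit majority padded to n bits) and all parameter choices are my own illustration, not anything from the talk:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, t, m = 8, 2, 20000          # dimension, truncation degree, sample size

def f(x):                      # x in {-1,+1}^n; majority of the first 3 bits
    return 1.0 if x[:3].sum() > 0 else -1.0

X = rng.choice([-1.0, 1.0], size=(m, n))
y = np.array([f(x) for x in X])

# Fourier basis: chi_S(x) = prod_{i in S} x_i.
# Under the uniform distribution, hat{f}(S) = E[f(x) * chi_S(x)],
# estimated here by the empirical mean over the sample.
coeffs = {}
for deg in range(t + 1):
    for S in itertools.combinations(range(n), deg):
        chi = np.prod(X[:, S], axis=1) if S else np.ones(m)
        coeffs[S] = float(np.mean(y * chi))

def predict(x):                # sign of the degree-<=t approximation
    val = sum(c * np.prod([x[i] for i in S]) for S, c in coeffs.items())
    return 1.0 if val >= 0 else -1.0

Xtest = rng.choice([-1.0, 1.0], size=(2000, n))
err = np.mean([predict(x) != f(x) for x in Xtest])
print(err)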
Title: Approximate Clustering without the Approximation
There has been substantial work on approximation algorithms for clustering
data under distance-based objective functions such as k-median, k-means,
and min-sum objectives. This work is fueled in part by the hope that
approximating these objectives well will indeed yield more accurate
solutions. That is, for problems such as clustering proteins by function,
or clustering images by subject, there is some unknown correct “target”
clustering and the implicit assumption is that clusterings that are
approximately optimal in terms of these distance-based measures are also
approximately correct in terms of error with respect to the target. In
this work we show that if we make this implicit assumption explicit —
that is, if we assume that any c-approximation to the given clustering
objective Phi is epsilon-close to the target — then we can produce
clusterings that are O(epsilon)-close to the target, even for values c for
which obtaining a c-approximation is NP-hard. In particular, for the
k-median, k-means, and min-sum objectives, we show that we can achieve
this guarantee for any constant c > 1.
Our results show how by explicitly considering the alignment between the
objective function used and the true underlying clustering goals, one can
bypass computational barriers and perform as if these objectives were
computationally substantially easier.
Title: Getting around Barriers in Learning Theory
This talk is about a collection of examples where a barrier in learning theory has fallen, how it fell, and some discussion about how we can get around remaining barriers. I plan to discuss and compare problems in sample complexity, computational complexity, and complex learning problems.