Thursday, February 01, 2007

Hardness via Unique Games Conjecture

Hey folks -- since today's lecture was pretty technical, feel free to ask for clarifications in the comments.

The main takeaway is that if you want to prove a strong hardness-of-approximation result for your favorite constraint satisfaction problem, all you need to do (assuming UGC) is design a strong "dictatorship vs. quasirandom" function tester, where the tests the tester uses are the same as the constraints you're trying to prove hardness for.

E.g., much later in the term we will give a function tester $T$ which makes $2$ queries $x, y \in \{-1,1\}^n$ and then checks that $f(x) \neq f(y)$. We will show that $T$ accepts all dictators with probability at least $.9001$ and accepts all $(.0001, .0001)$-quasirandom functions with probability at most $.7999$.
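To get a feel for such a tester, here is a toy simulation -- my own sketch, not the actual test from lecture: the noise rate of $.1$ and the particular way $y$ is sampled are assumptions chosen just to make a dictator pass with probability $.9$.

```python
import random

def maxcut_test(f, n, eps=0.1):
    """One run of a 2-query '!=' test (sketch; eps = 0.1 is an assumed
    noise rate).  Sample x uniformly from {-1,1}^n; let y be the
    coordinatewise negation of x, except each coordinate is flipped
    back to x_i independently with probability eps; accept iff
    f(x) != f(y)."""
    x = [random.choice([-1, 1]) for _ in range(n)]
    y = [xi if random.random() < eps else -xi for xi in x]
    return f(x) != f(y)

def acceptance_prob(f, n, trials=20000):
    """Estimate the probability that f passes the test."""
    return sum(maxcut_test(f, n) for _ in range(trials)) / trials

# A dictator f(x) = x_0 passes exactly when coordinate 0 got negated,
# i.e. with probability 1 - eps = 0.9.
dictator = lambda z: z[0]
print(acceptance_prob(dictator, n=10))
```

The point of the real analysis is the other direction: showing that no quasirandom function can pass such a test too often.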

As a consequence, we get (assuming UGC) that given a CSP over variables $v_1, ... v_n$ where all the constraints are of the form $v_i \neq v_j$, it is NP-hard to distinguish the case that there is an assignment satisfying at least a $.9$ fraction of the constraints and the case that every assignment satisfies at most a $.8$ fraction. Since CSPs with $\neq$ constraints are the same as the "Max-Cut" problem, this shows that giving a $.8/.9 = .8888$ approximation for Max-Cut is NP-hard.
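To see the CSP $\leftrightarrow$ Max-Cut correspondence concretely, here is a brute-force check (a toy example; the graph is made up) that the best achievable fraction of satisfied $\neq$-constraints is exactly the max-cut fraction of the graph:

```python
from itertools import product

def max_ne_fraction(n, edges):
    """Best fraction of v_i != v_j constraints satisfiable by any
    {-1,1}-assignment -- i.e. the Max-Cut value of the graph, since
    an assignment is a cut and a satisfied constraint is a cut edge."""
    best = 0.0
    for a in product([-1, 1], repeat=n):
        sat = sum(a[i] != a[j] for i, j in edges)
        best = max(best, sat / len(edges))
    return best

# A 5-cycle: an odd cycle can have at most 4 of its 5 '!=' constraints
# satisfied at once, so its Max-Cut value is 4/5 = 0.8.
print(max_ne_fraction(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```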


As mentioned, on Tuesday we'll move on to the next segment of the course, Learning. Also, homework #2 will come out. Have a good weekend!


Ryan said...

Since nobody has asked anything yet, I'll ask myself some questions.

1. What's going on in this proof? Can you remind me?

2. What do you mean when you say you prove c - eta vs. s + eta hardness for CSPs of "the same type" as the tester?

3. You construct all these labelings in the proof -- can you actually do this in polynomial time?

Ryan said...

Ryan 2:03pm: Good questions.

1. The idea is that we assume (UGC) that given a "constraint graph G" with permutation constraints over [k] on the edges, it's really really hard to find good labelings, even in nearly-satisfiable instances. We want to show that given a huge function tester TT composed of many little "dictator vs. quasirandom tests T", it's hard to find functions that make TT pass with probability > s, even assuming there are functions that make TT pass with probability > c.

To that end, we construct a polytime reduction algorithm R that takes a unique-constraint graph G and spits out a huge function tester TT = R(G) with T-subtests. We need to *prove* that when G is nearly satisfiable then there are functions making TT pass with probability at least c (-ish) and that when G is barely satisfiable, then no functions make TT pass with probability > s (-ish).

If we can do this, it indeed shows that distinguishing val >~ c from val <~ s is hard for big testers with T-subtests. Because if you had some magic polytime algorithm M that could make this distinction, then you could also distinguish between almost-satisfiable and barely-satisfiable unique-constraint instances: given such a G, apply R, and then apply M.
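The composition in the last step is trivial but worth spelling out; here R and M are the hypothetical reduction and magic algorithm from above:

```python
def distinguish(G, R, M):
    """Sketch of the contradiction: R is the (hypothetical) polytime
    reduction and M the magic distinguisher for big testers with
    T-subtests.  If G is almost satisfiable, R(G) has value >~ c and
    M says YES; if G is barely satisfiable, R(G) has value <~ s and
    M says NO -- so R followed by M would decide the unique-games
    problem, contradicting UGC."""
    TT = R(G)     # polytime by assumption on R
    return M(TT)  # polytime by assumption on M
```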


2. Suppose the "dictator-vs.-quasirandom" function tester T we are given makes 4 queries and then either tests XOR(a,b,c,d) or NAE(a,b,c,d) or AND(-a,-b,c,d). Then we prove that *given* a CSP over n variables in which all the constraints are either of the form XOR(v1,v2,v3,v4) or NAE(v1,v2,v3,v4) or AND(-v1,-v2,v3,v4), it's NP-hard to find assignments to the v's that satisfy a lot of the constraints. Specifically, there is no polytime algorithm which outputs YES on all >(c-eta)-satisfiable instances and NO on all <(s+eta)-satisfiable instances (unless P = NP).
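For instance, evaluating how good an assignment is for a CSP of this type is straightforward; here is a toy illustration (the particular constraints and the sign convention -- literal -1 means True -- are made up for the example):

```python
# Predicates on {-1,1} inputs, with -1 playing the role of True.
def XOR(a, b, c, d):
    return a * b * c * d == -1          # odd number of -1 inputs

def NAE(a, b, c, d):
    return not (a == b == c == d)       # "not all equal"

def AND(a, b, c, d):
    return a == b == c == d == 1        # all inputs equal to 1

def satisfied_fraction(assignment, constraints):
    """constraints: list of (predicate, signed 1-indexed variables),
    where a negative index -i means the negated literal -v_i."""
    def lit(s):
        return assignment[abs(s) - 1] * (1 if s > 0 else -1)
    return sum(pred(*map(lit, idxs))
               for pred, idxs in constraints) / len(constraints)

# A tiny CSP over v1..v4 using only the three allowed constraint types.
csp = [(XOR, (1, 2, 3, 4)), (NAE, (1, 2, 3, 4)), (AND, (-1, -2, 3, 4))]
print(satisfied_fraction([1, 1, 1, -1], csp))  # satisfies XOR and NAE
```

Checking one assignment is easy; the hardness statement is about finding a near-optimal assignment among all 2^n of them.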


3. You don't have to. All you have to do is show that the *reduction* R runs in polynomial time. (It does -- actually, it's *linear* time in n, the number of vertices in G; it's exponential time in k, the number of labels for G, but who cares, since k is constant.) All of the construction of labelings is *only for the proof* of the reduction's properties. The reduction itself has nothing to do with labelings.

Anonymous said...

Lecture note (6) is very tough to understand.