Tuesday, February 06, 2007

Sampling

In class, I handwaved over the proof that we can efficiently estimate $E[F_{S \subseteq I}(x)^2]$. The proof is very easy:

$E_x[F_{S \subseteq I}(x)^2] = E_x[\hat{f_x}(S)^2] = E_x[(E_w[f_x(w) w_S])^2]$.

Now we use a trick: Pick two copies of $w$ independently:

... $= E_x[E_{w,w'}[f_x(w)\, w_S\, f_x(w')\, w'_S]] = E_{x,w,w'}[f(w,x)\, w_S\, f(w',x)\, w'_S]$.

And this expectation can be efficiently estimated empirically, since we can sample the random variable inside the expectation, and this r.v. is bounded in $[-1,1]$. (I guess officially what I keep using here is "Hoeffding's Bound".)
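Here is a little Python sketch of the empirical estimator (not from lecture; the way `f` is represented, the names, and the sample count are just placeholder choices I'm assuming for illustration):

```python
import random

def estimate_fourier_weight(f, n, I, S, num_samples=10000):
    """Empirically estimate E_{x,w,w'}[ f(w,x) w_S f(w',x) w'_S ],
    which equals E_x[ hat{f_x}(S)^2 ], i.e. the Fourier weight of f
    on sets T with T intersect I equal to S.

    f : {-1,1}-valued function on n bits, called as f(bits), where
        bits is a dict {coordinate: +-1}.  S must be a subset of I.
    """
    outside = [i for i in range(n) if i not in I]

    def random_bits(coords):
        return {i: random.choice((-1, 1)) for i in coords}

    total = 0.0
    for _ in range(num_samples):
        x  = random_bits(outside)   # shared restriction on coordinates outside I
        w  = random_bits(I)         # first independent copy of the inner coordinates
        w2 = random_bits(I)         # second independent copy

        chi_w, chi_w2 = 1, 1
        for i in S:                 # w_S and w'_S: the parity of the S-coordinates
            chi_w  *= w[i]
            chi_w2 *= w2[i]

        total += f({**w, **x}) * chi_w * f({**w2, **x}) * chi_w2

    # By Hoeffding, this average of [-1,1]-valued samples is within eps of the
    # true expectation except with probability 2*exp(-eps^2 * num_samples / 2).
    return total / num_samples
```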

--

Also: at the end of class, when showing how to break the one-way permutation $f$ given a PPT algorithm $A$ for guessing the hard-core predicate, we had that if $A$ is deterministic, then for a $2\gamma$ fraction of $x$'s, $\Pr_r[A(f(x), r) = x \cdot r] \geq 1/2 + \gamma$. If $A$ is not deterministic, let $\mathcal{A}$ denote the function which on input $r$ equals $E_{A}[A(f(x), r)]$, where the expectation is over $A$'s randomness. Now $\mathcal{A}$ is a function with range in $[-1,1]$, so we can apply Goldreich-Levin to it...

Except not quite, since in GL we assume we actually have access to $\mathcal{A}$, whereas really, we only have access to a $\{-1,1\}$-valued randomized function whose expectation is $\mathcal{A}$. But I hope you can convince yourself that that's good enough; basically, we just have to throw in an expectation over $A$'s random coins in the derivation in the first part of this post.
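Concretely (just my sketch of that step, writing $z$ for the coordinates of $r$ outside $I$ and $w$ for those inside, so that $\mathcal{A}_z(w) = E_A[A(f(x), (w,z))]$): the quantity we estimated in the first part becomes $E_{z,w,w'}[\mathcal{A}_z(w)\, w_S\, \mathcal{A}_z(w')\, w'_S]$. Since $w$ and $w'$ are drawn independently, we may as well run $A$ with fresh coins on each of the two queries, and then $E_{z,w,w'}[\mathcal{A}_z(w)\, w_S\, \mathcal{A}_z(w')\, w'_S] = E_{z,w,w',\text{coins}}[A(f(x),(w,z))\, w_S\, A(f(x),(w',z))\, w'_S]$. The random variable inside is still bounded in $[-1,1]$, so Hoeffding's bound applies exactly as before.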

2 comments:

b said...

1) It seems that the algorithm in the proof of Proposition 2.4 is trying to learn X_s, rather than f.

2) How would knowing the values of f at e_i for all i help to recover f, for an arbitrary function f that is \epsilon-close to X_s?

b said...

My other post is about lecture 7. Sorry I forgot to mention.