Tuesday, February 06, 2007

Sampling

In class, I handwaved over the proof that we can efficiently estimate $\mathbf{E}[F_{S,I}(x)^2]$. The proof is very easy:

$$\mathbf{E}_x\big[F_{S,I}(x)^2\big] \;=\; \mathbf{E}_x\big[\hat{f_x}(S)^2\big] \;=\; \mathbf{E}_x\Big[\big(\mathbf{E}_w[f_x(w)\,w_S]\big)^2\Big].$$
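
For reference, the middle equality is just the definition of the Fourier coefficient $\hat{f_x}(S)$ as the correlation of $f_x$ with the character $\chi_S$, where (as in lecture, I believe) $w_S$ is shorthand for $\prod_{i \in S} w_i$:

$$\hat{f_x}(S) \;=\; \mathbf{E}_w\big[f_x(w)\,\chi_S(w)\big] \;=\; \mathbf{E}_w\big[f_x(w)\,w_S\big], \qquad w_S := \prod_{i \in S} w_i.$$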

Now we use a trick: pick two independent copies of $w$, call them $w$ and $w'$:

$$\cdots \;=\; \mathbf{E}_x\Big[\mathbf{E}_{w,w'}\big[f_x(w)\,w_S\,f_x(w')\,w'_S\big]\Big] \;=\; \mathbf{E}_{x,w,w'}\big[f(w,x)\,w_S\,f(w',x)\,w'_S\big].$$

And this expectation can be efficiently estimated empirically, since we can sample the random variable inside the expectation, and this r.v. is bounded in [-1,1]. (I guess officially what I keep using here is "Hoeffding's Bound".)
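Here is a minimal sketch of that estimator in Python. The function names, the way the coordinates are split into the restricted part $x$ and the free part $w$, and the toy $f$ at the bottom are illustrative assumptions, not anything from lecture; the point is just that averaging i.i.d. samples of the bounded random variable $f(w,x)\,w_S\,f(w',x)\,w'_S$, with the number of samples dictated by Hoeffding's bound, estimates $\mathbf{E}_x[F_{S,I}(x)^2]$ to within $\pm\epsilon$ with high probability.

```python
import math
import random

def estimate_weight(f, n_free, n_restricted, S, eps, delta):
    """Empirically estimate E_x[F_{S,I}(x)^2] = E_{x,w,w'}[ f(w,x) w_S f(w',x) w'_S ].

    f takes (w, x) with entries in {-1,+1} and returns a value in {-1,+1};
    S is a set of indices into the free coordinates w.
    """
    # Hoeffding's bound: each sample lies in [-1,1], so
    # m >= (2 / eps^2) * ln(2 / delta) samples suffice for an estimate
    # within +-eps, with probability at least 1 - delta.
    m = math.ceil((2 / eps ** 2) * math.log(2 / delta))
    total = 0.0
    for _ in range(m):
        x  = [random.choice((-1, 1)) for _ in range(n_restricted)]
        w  = [random.choice((-1, 1)) for _ in range(n_free)]
        wp = [random.choice((-1, 1)) for _ in range(n_free)]   # independent copy w'
        w_S  = math.prod(w[i]  for i in S)   # character value prod_{i in S} w_i
        wp_S = math.prod(wp[i] for i in S)
        total += f(w, x) * w_S * f(wp, x) * wp_S
    return total / m

# Toy check: f(w, x) = x_0 * w_0, whose restrictions f_x put all of their
# Fourier weight on S = {0}; the printed estimate should be close to 1.
if __name__ == "__main__":
    f = lambda w, x: x[0] * w[0]
    print(estimate_weight(f, n_free=2, n_restricted=1, S={0}, eps=0.05, delta=0.01))
```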

--

Also: at the end of class, when showing how to break the one-way permutation $f$ given a PPT algorithm $A$ for guessing the hard-core predicate, we had that if $A$ is deterministic, then for a $2\gamma$ fraction of $x$'s, $\Pr_r[A(f(x),r) = x \cdot r] \ge 1/2 + \gamma$. If $A$ is not deterministic, let $g$ denote the function which on input $r$ equals $\mathbf{E}_A[A(f(x),r)]$, where the expectation is over $A$'s randomness. Now $g$ is a function with range in $[-1,1]$, so we can apply Goldreich-Levin to it...

Except not quite, since in GL we assume we actually have access to $g$, whereas really we only have access to a $\{-1,1\}$-valued randomized function whose expectation is $g$. But I hope you can convince yourself that that's good enough; basically, we just have to throw in an expectation over $A$'s random coins in the derivation in the first part of this post.
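
In code, the fix is just one more layer of averaging. Here is a hedged sketch (the oracle, its toy success probability, and all the names are placeholders made up for illustration): each fresh call to the randomized $A$ already gives an unbiased $\{-1,1\}$-valued sample of $g(r)$, so averaging calls estimates $g(r)$ itself, and the Hoeffding argument from the first part of the post goes through unchanged.

```python
import random

def estimate_g(A, fx, r, m):
    """Estimate g(r) = E_A[A(f(x), r)] by averaging m fresh runs of A.

    Each call to A uses fresh coins, so a single call is already an
    unbiased {-1,+1}-valued sample of g(r); averaging just reduces the
    variance (Hoeffding again).
    """
    return sum(A(fx, r) for _ in range(m)) / m

# Toy randomized "predicate guesser": answers the GL predicate <x, r>
# (written multiplicatively as (-1)^(<x,r> mod 2)) correctly with
# probability 3/4 on every call, independently.
def make_noisy_oracle(x, p_correct=0.75):
    def A(fx, r):
        correct = (-1) ** (sum(xi * ri for xi, ri in zip(x, r)) % 2)
        return correct if random.random() < p_correct else -correct
    return A

if __name__ == "__main__":
    x = [1, 0, 1, 1]
    A = make_noisy_oracle(x)      # below we only query A, never read x
    r = [1, 1, 0, 1]
    print(estimate_g(A, fx=None, r=r, m=10_000))   # should be close to 0.5
```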

2 comments:

b said...

1) It seems that the algorithm in the proof of Proposition 2.4 is trying to learn X_s, rather than f.

2) How would knowing the values of f at e_i for all i help to recover f, for an arbitrary function f that is \epsilon-close to X_s?

b said...

My other post is about lecture 7; sorry, I forgot to mention that.