In class, I handwaved over the proof that we can efficiently estimate the Fourier weight $W = \sum_{T \subseteq \bar{J}} \hat{f}(S \cup T)^2$, for a fixed $S \subseteq J \subseteq [n]$. The proof is very easy:
$W = \mathbf{E}_{z}\Big[\mathbf{E}_{y}\big[f(y,z)\,\chi_S(y)\big]^2\Big]$,
where $y$ denotes the coordinates in $J$ and $z$ those in $\bar{J}$. (To see this, expand $\mathbf{E}_{y}[f(y,z)\chi_S(y)] = \sum_{T \subseteq \bar{J}} \hat{f}(S \cup T)\,\chi_T(z)$ and apply Parseval in $z$.)
Now we use a trick: Pick two copies of $y$ independently:
$W = \mathbf{E}_{z}\Big[\mathbf{E}_{y,y'}\big[f(y,z)\,\chi_S(y)\,f(y',z)\,\chi_S(y')\big]\Big] = \mathbf{E}_{y,y',z}\big[f(y,z)\,f(y',z)\,\chi_S(y)\,\chi_S(y')\big]$.
And this expectation can be efficiently estimated empirically, since we can sample the random variable inside the expectation, and this r.v. is bounded in $[-1,1]$. (I guess officially what I keep using here is "Hoeffding's Bound".)
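Just to make the sampling concrete, here is a minimal sketch of that empirical estimation in Python. The names estimate_weight, f, J, S are mine, not from lecture; the callable f stands in for query access to the function.

import math
import random

def estimate_weight(f, n, J, S, eps, delta):
    """Estimate W = sum over T in complement(J) of fhat(S u T)^2 by sampling.

    f          -- query access to a {-1,1}-valued function on {-1,1}^n
                  (any callable taking a length-n tuple of +/-1 values)
    J          -- set of restricted coordinates; S is a subset of J
    eps, delta -- accuracy and confidence parameters
    """
    Jbar = [i for i in range(n) if i not in J]
    # Hoeffding's bound: m = 2*ln(2/delta)/eps^2 samples of a [-1,1]-valued
    # r.v. estimate its mean to within eps, except with probability delta.
    m = math.ceil(2 * math.log(2 / delta) / eps ** 2)
    total = 0.0
    for _ in range(m):
        z = {i: random.choice((-1, 1)) for i in Jbar}  # shared coords outside J
        y = {i: random.choice((-1, 1)) for i in J}     # first copy of J-coords
        y2 = {i: random.choice((-1, 1)) for i in J}    # independent second copy
        x1 = tuple(y[i] if i in y else z[i] for i in range(n))
        x2 = tuple(y2[i] if i in y2 else z[i] for i in range(n))
        chi1 = math.prod(y[i] for i in S)   # chi_S(y); empty product is 1
        chi2 = math.prod(y2[i] for i in S)  # chi_S(y')
        total += f(x1) * f(x2) * chi1 * chi2
    return total / m

For instance, with f = lambda x: x[0] * x[1], n = 3, J = {0, 1}, and S = {0, 1}, the estimate should come out near 1, since all the Fourier weight sits on the set {0, 1}.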
--
Also: at the end of class, when showing how to break the one-way permutation given a PPT algorithm $A$ for guessing the hard-core predicate, we had that if $A$ is deterministic, we get that for at least an $\epsilon/2$ fraction of $x$'s, $\mathbf{E}_r[A(f(x), r)\,\chi_x(r)] \geq \epsilon/2$. If $A$ is not deterministic, let $\tilde{A}$ denote the function which on input $(w, r)$ equals $\mathbf{E}[A(w, r)]$, where the expectation is over $A$'s randomness. Now $\tilde{A}$ is a function with range in $[-1,1]$, so we can apply Goldreich-Levin to it...
Except not quite, since in GL we assume we actually have access to $\tilde{A}$, whereas really, we only have access to a $\{-1,1\}$-valued randomized function whose expectation is $\tilde{A}$. But I hope you can convince yourself that that's good enough; basically, we just have to throw in an expectation over $A$'s random coins in the derivation in the first part of this post.
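To spell that step out (the coin variables $\omega, \omega'$ below are my notation): write $A_\omega$ for $A$ run with coins $\omega$, so that $\tilde{A} = \mathbf{E}_\omega[A_\omega]$. In the two-copies identity from the first part, just run $A$ with fresh independent coins on each copy:

$$\mathbf{E}_{y,y',z,\omega,\omega'}\big[A_\omega(f(x),(y,z))\,\chi_S(y)\,A_{\omega'}(f(x),(y',z))\,\chi_S(y')\big] = \mathbf{E}_{y,y',z}\big[\tilde{A}(f(x),(y,z))\,\chi_S(y)\,\tilde{A}(f(x),(y',z))\,\chi_S(y')\big],$$

since $\omega$ and $\omega'$ are independent of each other and of $(y, y', z)$. The right-hand side is exactly the quantity from the first part with $\tilde{A}(f(x), \cdot)$ in place of $f$, and the random variable being sampled on the left is still bounded in $[-1,1]$, so Hoeffding's bound applies unchanged.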
Tuesday, February 06, 2007
2 comments:
1) It seems that the algorithm in the proof of Proposition 2.4 is trying to learn X_s, rather than f.
2) How would knowing the values of f at e_i for all i help to recover f, for an arbitrary function f that is \epsilon-close to X_s?
My other post is about lecture 7; sorry, I forgot to mention that.