Solutions are posted.
Regarding Question #2: I referred to this as "Talagrand's Lemma", and that's how I personally think of it, having learned it first as Proposition 2.2 in Michel Talagrand's ingenious paper "How much are increasing sets positively correlated?".
I think that, really, it should count as a folklore result. For example, in arithmetic combinatorics I think it's sometimes called Chang's Theorem. It is also the pith of the very technical proof of Bourgain's Theorem on the Fourier tails of boolean functions (not available online, but appearing in a paper of Khot and Naor). Bourgain proves it by a different route and doesn't quite derive it explicitly. The alternative proof relies on hypercontractivity and is extremely short:
Given $f : \{-1,1\}^n \to \{0,1\}$ with $\mathbf{E}[f] = \alpha$, let $g = \sum_{i=1}^n \hat{f}(i)\,x_i$ be the degree-1 part of $f$. Then $\sum_i \hat{f}(i)^2 = \mathbf{E}[fg]$ by Plancherel, and $\mathbf{E}[fg] \leq \|f\|_{q'}\,\|g\|_q$ for any $q > 1$ (with $q'$ the conjugate exponent, $1/q + 1/q' = 1$), by Hölder. Use hypercontractivity on $g$, noting that it has degree 1. One gets $\|g\|_q \leq \sqrt{q-1}\,\|g\|_2 = \sqrt{q-1}\,\bigl(\sum_i \hat{f}(i)^2\bigr)^{1/2}$. On the other hand, we have $\|f\|_{q'} = \alpha^{1/q'}$ where $1/q' = 1 - 1/q$, since $f$ is $\{0,1\}$-valued and hence $|f|^{q'} = f$. Putting it together, we get $\sum_i \hat{f}(i)^2 \leq \alpha^{1-1/q}\,\sqrt{q-1}\,\bigl(\sum_i \hat{f}(i)^2\bigr)^{1/2}$, which implies $\sum_i \hat{f}(i)^2 \leq (q-1)\,\alpha^{2-2/q}$. Now we just need to choose $q$ to balance $(q-1)$ and $\alpha^{-2/q}$; the optimum choice is $q = 1 + 2\ln(1/\alpha)$, which gives $\sum_i \hat{f}(i)^2 \leq O(\alpha^2 \log(1/\alpha))$ and completes the proof.
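As a quick sanity check (my own illustration, not part of the solutions; `fourier_level1_weight` and `chang_bound` are hypothetical helper names), here is a brute-force verification in Python on the AND of $k$ bits, for which $\alpha = 2^{-k}$:

```python
from itertools import product
from math import log

def fourier_level1_weight(f, n):
    """Brute-force the degree-1 Fourier weight sum_i f_hat({i})^2 of f: {-1,1}^n -> {0,1}."""
    pts = list(product([-1, 1], repeat=n))
    w1 = 0.0
    for i in range(n):
        # f_hat({i}) = E[f(x) * x_i]
        coef = sum(f(x) * x[i] for x in pts) / len(pts)
        w1 += coef ** 2
    return w1

def chang_bound(alpha):
    """The bound (q-1) * alpha^(2-2/q) at the balancing choice q = 1 + 2 ln(1/alpha)."""
    q = 1 + 2 * log(1 / alpha)
    return (q - 1) * alpha ** (2 - 2 / q)

# f = indicator that all k bits equal 1, so alpha = E[f] = 2^-k
k = 4
f = lambda x: int(all(b == 1 for b in x))
alpha = 2 ** -k
w1 = fourier_level1_weight(f, k)
print(w1, chang_bound(alpha), w1 <= chang_bound(alpha))
```

For this $f$ every singleton coefficient is $2^{-k}$, so the level-1 weight is $k\,4^{-k} = \alpha^2 \log_2(1/\alpha)$, comfortably inside the bound.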
Wednesday, April 25, 2007
Tuesday, April 24, 2007
Randomized DT complexity of monotone graph properties
I posted the lecture notes for today; they contain a bit more information about Yao's Conjecture (that every nontrivial monotone graph property on $v$ vertices has randomized decision-tree complexity $\Omega(v^2)$).
Edinburgh
Just got back from a workshop on Geometry and Algorithms in Edinburgh. Was there any Fourier analysis? But of course! What would geometry and algorithms be without it?
Unfortunately, though, it's hard to get it into one's talk in 50 minutes, since one has to cover the geometry and algorithms parts. Guy Kindler mentioned some Fourier analysis in connection with UGC-hardness for Max-Cut. Assaf Naor talked a bit about his upcoming magnum opus with Manor Mendel on "Metric Cotype", which will appear in the Annals of Math. That paper is pretty much all Fourier analysis for functions $f : \mathbb{Z}_m^n \to X$, but with $X$ a general Banach space. And Robi Krauthgamer came closest to actually doing some Fourier analysis in his slides, while talking about his work on the communication complexity and embeddability of Edit Distance. Unfortunately, his paper (joint with Alexandr Andoni) doesn't seem to be available yet.
I did my best to get some Fourier analysis in there while talking about the parallel repetition/foams stuff I talked about at the ACO Seminar in March, but no dice. I do think the Fourier analysis we used in that result was pretty cute though; basically: (1) Talagrand's Lemma from Homework 5 Problem 2; and (2) the reversed form of the Hypercontractive Inequality, namely:
If $f \geq 0$ and $-\infty < q < p < 1$, then $\|T_\rho f\|_q \geq \|f\|_p$ so long as $0 \leq \rho \leq \sqrt{(1-p)/(1-q)}$.
You know -- just like the usual inequality, but reversed.
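As an illustration (my own sketch, with hypothetical helper names), here is a small brute-force check of the reversed inequality on a strictly positive function on $\{-1,1\}^2$, taking $p = 1/2$, $q = -1$, and $\rho$ at the critical value $\sqrt{(1-p)/(1-q)}$:

```python
from itertools import product, combinations
from math import prod, sqrt

def fourier(f, n):
    """All Fourier coefficients of f : {-1,1}^n -> R, indexed by subsets S."""
    pts = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in S) for x in pts) / len(pts)
            for k in range(n + 1) for S in combinations(range(n), k)}

def noise(coeffs, rho):
    # T_rho multiplies the coefficient on chi_S by rho^{|S|}
    return {S: rho ** len(S) * c for S, c in coeffs.items()}

def evaluate(coeffs, x):
    return sum(c * prod(x[i] for i in S) for S, c in coeffs.items())

def norm(vals, p):
    # ||f||_p = (E |f|^p)^{1/p}; makes sense for p < 1 (even negative) when f > 0
    return (sum(v ** p for v in vals) / len(vals)) ** (1 / p)

n = 2
f = lambda x: 2 + x[0] * x[1]      # strictly positive on {-1,1}^2
p, q = 0.5, -1.0
rho = sqrt((1 - p) / (1 - q))      # critical rho for this (p, q) pair
pts = list(product([-1, 1], repeat=n))
Tf = noise(fourier(f, n), rho)
lhs = norm([evaluate(Tf, x) for x in pts], q)   # ||T_rho f||_q
rhs = norm([f(x) for x in pts], p)              # ||f||_p
print(lhs, rhs, lhs >= rhs)
```

Note the reversal really does require $f \geq 0$: for $p, q < 1$ these "norms" are not norms, and the noise operator *increases* the $q$-"norm" here.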
Friday, April 20, 2007
Homework #5
Hope you enjoyed Avrim on Tuesday and a day off on Thursday.
Please post any questions or comments about Homework #5 here in the comments.
Sunday, April 08, 2007
Homework #4 graded
The average was 35/40, with a high of 38. As usual, pretty much everybody knew what was going on.
The most popular questions seemed to be #1 and #2, even though those were definitely the longest (especially #1!). #5 is by far the fastest, if you see how to do it.
Thursday, April 05, 2007
Geometry of correlated Gaussians
In class we were discussing the geometry of $\rho$-correlated, $n$-dimensional Gaussians; call them $x$ and $y$.
We have that $\|x\|^2 = \sum_{i=1}^n x_i^2$. This is the sum of $n$ very well-behaved independent random variables, each with mean $1$ and variance $2$, so its value will be $n \pm O(\sqrt{n})$ with very high probability. (To be more accurate, say $n \pm O(\sqrt{n \log n})$ with probability at least $1 - 1/\mathrm{poly}(n)$.) Hence $\|x\|$ will be $(1 \pm o(1))\sqrt{n}$ with high probability. Since $y$ is also distributed as a standard $n$-dimensional Gaussian, the same is true of $\|y\|$.
Now imagine $x$ is fixed and we choose $y$ to be $\rho$-correlated to $x$; i.e., $y = \rho x + \sqrt{1-\rho^2}\,z$, where the random variables $z_1, \dots, z_n$ are independent standard normals. Then $\langle x, y \rangle = \rho\|x\|^2 + \sqrt{1-\rho^2}\,\langle x, z \rangle$. I.e., the dot product is $\rho\|x\|^2$ plus an independent one-dimensional normal, $\sqrt{1-\rho^2}\,\|x\| \cdot N(0,1)$. Since $\|x\|^2 = (1 \pm o(1))n$ with high probability, the dot product is $\rho n \pm O(\sqrt{n}\,\mathrm{polylog}(n))$ with high probability.
So the cosine of the angle between $x$ and $y$ will be $\rho \pm o(1)$, and hence the angle will be $\arccos(\rho) \pm o(1)$ with high probability (assuming $\rho$ is treated as a constant as $n \to \infty$).
Now this doesn't really yet prove that $x$ and $y$ are distributed like a random pair on the surface of the radius-$\sqrt{n}$ sphere with angle $\arccos(\rho)$. We just know that their angle will be about $\arccos(\rho)$ and they will both be close to the sphere. We should really look at the distribution of the pair $(x, y)$. But applying any fixed rotation simultaneously to both $x$ and $y$ leaves this joint distribution unchanged, since standard Gaussians are spherically symmetric. I guess this more or less justifies the overall claim.
Actually, I believe that $y - \rho x$ will be close to orthogonal to $x$ with high probability. The dot product between the two will be distributed like $\sqrt{1-\rho^2}\,\|x\| \cdot N(0,1)$, which will probably be on the order of $\sqrt{n}$. However the product of the lengths of the two vectors will be like $n$. So the cosine of the angle between them will be like $1/\sqrt{n}$, and the angle will be close to $90$ degrees.
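A quick simulation (my own sketch; the sampling formula $y = \rho x + \sqrt{1-\rho^2}\,z$ is the standard way to generate a $\rho$-correlated copy) bears all of this out: both norms come out near $\sqrt{n}$, the cosine of the angle near $\rho$, and $y - \rho x$ nearly orthogonal to $x$:

```python
import random
from math import sqrt

def correlated_gaussians(n, rho, rng):
    """Sample a rho-correlated pair of standard n-dimensional Gaussians."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    z = [rng.gauss(0, 1) for _ in range(n)]
    y = [rho * xi + sqrt(1 - rho ** 2) * zi for xi, zi in zip(x, z)]
    return x, y

rng = random.Random(0)
n, rho = 100_000, 0.3
x, y = correlated_gaussians(n, rho, rng)

dot = sum(a * b for a, b in zip(x, y))
nx = sqrt(sum(a * a for a in x))
ny = sqrt(sum(b * b for b in y))
print(nx / sqrt(n), ny / sqrt(n))   # both ~ 1
print(dot / (nx * ny))              # ~ rho

# y - rho*x should be nearly orthogonal to x: cosine ~ 1/sqrt(n)
w = [b - rho * a for a, b in zip(x, y)]
nw = sqrt(sum(c * c for c in w))
cos_orth = sum(a * c for a, c in zip(x, w)) / (nx * nw)
print(cos_orth)                     # ~ 0
```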
Wednesday, April 04, 2007
Approximating not-nice functions by nice functions
In the last class I asserted that it was possible to approximate certain not-nice functions like $\mathrm{sgn}(x)$ and step functions by $B$-nice functions. For example, I asserted that for every $\epsilon > 0$ there exists a function $f_\epsilon$ which is $B$-nice (for some $B$ depending on $\epsilon$) and approximates the step function in the following sense: $f_\epsilon(x) = 0$ for $x \leq 0$; $f_\epsilon(x) = 1$ for $x \geq \epsilon$; and $f_\epsilon(x) \in [0,1]$ for $0 \leq x \leq \epsilon$.
There was a question in class as to whether this could really be possible. Specifically, if such a function is $0$ for all $x \leq 0$, and it's smooth, then it has all derivatives equal to $0$ at every $x \leq 0$. Shouldn't this make it $0$ everywhere?
Well, not so. It's true that at any $x \leq 0$, all derivatives are $0$. But Taylor's theorem does not force such a function to be $0$ everywhere. Remember that Taylor (for smooth functions) just says that for any $x$ and any $k$, $f(x) = f(0) + f'(0)x + \cdots + \frac{f^{(k)}(0)}{k!}x^k + \frac{f^{(k+1)}(\xi)}{(k+1)!}x^{k+1}$ for some $\xi$ between $0$ and $x$.
For a concrete example, consider the function $b$ which is $0$ for $x \leq 0$, and is $e^{-1/x}$ for $x > 0$. It's easy to see that $b$ is smooth away from $0$, and it's not hard to check that its derivatives, of all orders, at $0$ (when approached from the right) are $0$.
In fact, functions like $b$ (known as bump functions or mollifiers) are what one uses to create smooth approximate step functions like the one above -- essentially, by taking $S * \varphi_\epsilon$, where $S$ denotes the actual discontinuous step function, $*$ denotes convolution, and $\varphi_\epsilon$ denotes a compressed version of a bump function $\varphi$ built out of $b$, viz., $\varphi_\epsilon(x) = \frac{1}{\epsilon}\varphi(x/\epsilon)$.
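Here's a tiny numerical sketch (my own; `smooth_step` is a hypothetical name). Rather than literally computing the convolution, it uses the standard shortcut built from the same function $b$: the ratio $b(t)/(b(t) + b(1-t))$ is smooth, equal to $0$ for $t \leq 0$ and $1$ for $t \geq 1$:

```python
from math import exp

def b(t):
    # the classic smooth-but-flat-at-zero function: 0 for t <= 0, e^(-1/t) for t > 0
    return exp(-1.0 / t) if t > 0 else 0.0

def smooth_step(x, eps=0.1):
    # smooth transition: 0 for x <= 0, 1 for x >= eps, strictly between in (0, eps);
    # the denominator is never 0 since b(t) and b(1-t) can't both vanish
    t = x / eps
    return b(t) / (b(t) + b(1 - t))

print(smooth_step(-1.0), smooth_step(0.05), smooth_step(2.0))
```

By symmetry the value at the midpoint $x = \epsilon/2$ is exactly $1/2$, and the function increases monotonically across the transition window.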