Wednesday, May 2, 2007
Dear 6.841 students,
As part of your course participation requirement, please post/send in
your comments on the material covered in the 3 lectures on PCPs and the 2
lectures on Average-case complexity.
Here are the instructions for sending in your comments:
The idea is to send a brief overview *in your own words*
of what you thought was covered. Follow this up with a few words
saying what you liked and what you did not like. Any other comments,
questions, or topics to be clarified would also be welcome. Your
comments could range anywhere from a couple of paragraphs to a whole
volume! Post your comments on the blog at http://mit6841.blogspot.com/
(or email them to Swastik).
Sunday, March 11, 2007
When there are many satisfying assignments ...
My solution to Problem 4 in Problem Set 2 is an adaptation of Hirsch'98. This solution is quite different from Sipser-Saxe-Furst (contrary to what the hint in the Problem Set suggests), so I wonder whether someone has a different (easier) solution.
On the other hand, reading Hirsch's paper raised an interesting question. As a corollary of this exercise we know that formulas with bounded-width clauses and many satisfying assignments have short (constant size) partial satisfying assignments. It is an open question (according to Hirsch) whether a similar procedure exists for CNF formulas with unbounded-width clauses.
To suggest that this may not be possible, Hirsch (in Remark 6 of Hirsch'98) constructs a CNF for which more than half of all assignments are satisfying, and yet for every partial assignment to sqrt(n) variables there is a completion to a full assignment that is unsatisfying.
Hirsch, however, does not point out that the constructed CNF formula might (and most likely will) be exponential in size. This raises a new, more concrete open question: can the present algorithm be modified to exploit a bound on formula length, in addition to clause width, so that it solves the problem for CNFs with unbounded-width clauses in sub-exponential time?
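To pin down the notion of a short partial satisfying assignment used above, here is a naive brute-force sketch (my own illustration, with my own helper names; it is not the algorithm from the problem set or from Hirsch'98, and is only feasible for tiny instances). A partial assignment satisfies a CNF if it already makes some literal true in every clause, so every completion satisfies the formula.

    from itertools import combinations, product

    def short_satisfying_partial(clauses, n, t):
        # Clauses are lists of nonzero integers in DIMACS style:
        # literal +i means x_i = True, literal -i means x_i = False.
        # Look for a partial assignment fixing at most t of the n variables
        # that already puts a true literal in every clause.
        for k in range(t + 1):
            for vs in combinations(range(1, n + 1), k):
                for vals in product([False, True], repeat=k):
                    alpha = dict(zip(vs, vals))
                    if all(any(alpha.get(abs(l)) == (l > 0) for l in c)
                           for c in clauses):
                        return alpha
        return None

    # Toy example: (x1 or x2) and (x1 or not x3) is already satisfied
    # by the one-variable partial assignment x1 = True.
    print(short_satisfying_partial([[1, 2], [1, -3]], n=3, t=1))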
Friday, March 2, 2007
Looking for Partners for Projects?
If you are looking for a partner for your project, post a comment below or look at the other comments.
Incorrect attribution for rank bound
In lecture on Wednesday, 2/28/2007, I asserted that the
communication complexity of a function f is lower bounded
by the log of the rank of the associated matrix M_f.
I then attributed this to Yao.
The attribution was incorrect. Nick Harvey pointed out
to me that the source of this lower bound is Mehlhorn
and Schmidt, STOC 1982. (Thanks to Nick for correcting
me.)
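As a quick sanity check of the bound (a toy illustration I am adding, not something from the lecture): for equality on n-bit strings, the matrix M_{EQ_n} is the 2^n x 2^n identity, which has full rank, so log of the rank recovers the lower bound D(EQ_n) >= n.

    import numpy as np

    # Communication matrix of EQ_n: M[x][y] = 1 iff x == y,
    # i.e. the 2^n x 2^n identity matrix.
    n = 3
    M = np.eye(2 ** n, dtype=int)

    # rank(M) = 2^n, so log2(rank(M)) = n lower bounds D(EQ_n).
    r = np.linalg.matrix_rank(M)
    print(r, np.log2(r))   # 8 3.0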
Nick also pointed out that my explanation of the
"tiling lower bound" was not clear (I agree). I
will try to post additional notes to clarify this.
Also I should have mentioned that the book "Communication
Complexity" by Kushilevitz and Nisan is an excellent place
to read more about the topic.
- Madhu
Switching Lemmas
Petar asked me some questions about the switching lemma
that reminded me that I had not provided enough context
or background about this lemma to you. (By the way at the
bottom of this post I've added a glossary of terms used ...
that may be helpful to scan before reading the rest of this
post.)
The lemma, as proven in class, stated that for
every poly p, there exists a constant C = C_p
such that a DNF formula on n variables with
size s=p(n) can be converted into
a CNF formula on C variables (and hence of size ~ 2^C)
by hitting it with a restriction that leaves each variable
unset with probability 1/sqrt{n}.
The proof went in two stages: the first used the
restriction to reduce the length of terms to, say, c. The
second stage then applied only to c-DNF formulae.
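To get a feel for the first stage, here is a toy simulation (my own sketch, with my own helper names; it is not the proof): apply a random restriction that leaves each variable unset with probability 1/sqrt(n) to a DNF with wide terms, and observe that wide terms essentially never survive intact; they are either falsified (and drop out) or partially satisfied and much shorter.

    import math, random

    def random_restriction(n, p):
        # Each variable stays unset (None) with probability p,
        # otherwise it is fixed to 0 or 1 uniformly at random.
        return [None if random.random() < p else random.randint(0, 1)
                for _ in range(n)]

    def restrict_dnf(terms, rho):
        # A term is a list of (variable index, required value) pairs.
        out = []
        for term in terms:
            new_term, killed = [], False
            for v, b in term:
                if rho[v] is None:
                    new_term.append((v, b))   # literal survives the restriction
                elif rho[v] != b:
                    killed = True             # literal falsified: term drops out
                    break
                # if rho[v] == b the literal is satisfied and the term shrinks
            if not killed:
                if not new_term:
                    return [[]]               # a term became true: formula is 1
                out.append(new_term)
        return out

    random.seed(0)
    n = 100
    # 50 random terms of width 10 each
    terms = [[(v, random.randint(0, 1)) for v in random.sample(range(n), 10)]
             for _ in range(50)]
    rho = random_restriction(n, 1 / math.sqrt(n))
    survivors = restrict_dnf(terms, rho)
    print(len(survivors), max((len(t) for t in survivors), default=0))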
In the stronger switching lemmas (e.g., Hastad's) one
assumes that the formula is already a c(n)-DNF formula
and shows that this can be converted to a C(n)-CNF formula
after the restriction. For example, one may start with
a log^2 n-DNF formula and prove that after the switch
it is a log^3 n-CNF formula ...
Several points to note here:
1. One does not explicitly assume a size bound on the DNF
formula. But a bound is implicit anyway, since a c(n)-DNF
can have at most roughly (2n)^{c(n)} distinct terms (pick at
most c(n) of the n variables and a sign for each), i.e.,
about n^{c(n)}.
2. The lemma does not explicitly prove that the final
function depends on only a small number of variables
... however, problem 3 on problem set 2 will show why
the function depends on only a number of variables that
is a function of c(n) and C(n) alone, and not of n directly.
3. The proof of the stronger versions of the switching
lemma is much harder ... one needs to argue that the
c(n)-DNF formula does not have any maxterms of length
greater than C(n)+1 after it is hit with a restriction.
I personally like Hastad's proof (from his Ph.D. thesis)
which I found intuitive, though it has some tough
calculations. Razborov's proof is more subtle,
shorter but less intuitive to me. Paul Beame's survey on
switching lemmas is probably a great source to read all
this from (I haven't read it yet ... that's why it's on
the list of projects!).
Petar also pointed to a proof in the book on Kolmogorov
Complexity by Li and Vitanyi which rephrases Razborov's
proof of Hastad's switching lemma. The proof uses the
language of "incompressibility", which roughly says that
there is no way to compress 2^k + 1 strings into k bits.
Using "incompressibility" in your proofs versus
"counting + pigeonhole principle" is a matter of aesthetics
(sort of like using probabilities, versus cardinality of sets).
If the proof suits you better you are welcome to it, though
often there is a line-by-line translation without using the
phrase "incompressible".
I ought to add references here ... but am tiring myself out.
So will do so later. (Or someone could do so as a comment.)
- Madhu
Glossary:
A literal is a variable or its negation.
A term is a conjunction of literals.
A clause is a disjunction of literals.
A k-DNF has terms of length at most k (at most k literals per term);
An l-CNF has clauses of length at most l.
A term (l1 and l2 and ... and lk) is called a minterm of a function
f, if setting l1 = l2 = ... = lk = 1 sets f to 1, and {l1 ... lk}
is minimal with respect to this property.
A maxterm is a clause (l1 or l2 or ... or lk) with the property that
setting all literals to false falsifies the function, and
{l1 ... lk} is minimal with respect to this property.
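To make the minterm definition above concrete, here is a small brute-force sketch (my own illustration, with my own helper names, only sensible for tiny n) that recovers the minterms of a function given as a 0/1 oracle; maxterms can be computed symmetrically by forcing the function to 0 with all of a clause's literals set to false.

    from itertools import combinations, product

    def minterms(f, n):
        # A set of literals "forces" f if every completion that makes all of
        # them true makes f = 1; a minterm is a minimal forcing set.
        forcing = []
        for k in range(n + 1):
            for vs in combinations(range(n), k):
                for vals in product([0, 1], repeat=k):
                    lits = frozenset(zip(vs, vals))
                    if all(f(x) for x in product([0, 1], repeat=n)
                           if all(x[v] == b for v, b in lits)):
                        forcing.append(lits)
        # keep only the minimal forcing sets
        return [s for s in forcing if not any(t < s for t in forcing)]

    # Example: f = (x0 and x1) or (x2 and x3); its minterms are exactly
    # the two terms {x0=1, x1=1} and {x2=1, x3=1}.
    f = lambda x: 1 if (x[0] and x[1]) or (x[2] and x[3]) else 0
    print(minterms(f, 4))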
Friday, February 16, 2007
Welcome to MIT 6841 Blog
Hi All
Welcome to the Blog site for MIT 6.841.
We'll use this site to ask/answer queries about lectures.
If you post questions here, please also email the staff
(madhu and swastik).
Take care
- Madhu & Swastik.