I've read the paper fairly closely, and it mostly seems like the author is hiding a conflict-driven search inside ill-stated data structures, which let him perform a faulty analysis of his algorithm's runtime.<p>I've implemented a SAT solver and read the literature extensively. This paper is not up to the standards of clarity imposed by that literature; see, e.g., "Efficient Conflict Driven Learning in a Boolean Satisfiability Solver" (Zhang, available for free online). There is a world of difference in clarity of presentation between these two papers. There might be an English language barrier at work, I don't know.<p>If the author did some work to polish his presentation and state his definitions more clearly, as well as submit his solver to well-known competitions (<a href="http://www.satcompetition.org/2011/" rel="nofollow">http://www.satcompetition.org/2011/</a>), I'm sure he could get some attention from the relevant people. As unclear as it stands right now, I'm too busy with my own research to try to pull out the hidden conflict-driven algorithm that I suspect exists in this paper; it would be very time-consuming for little expected gain on my end.<p>If his algorithm beats the pants off all the others in SAT 2011, well, then I'd get right down to work.<p>Homework for someone who has some time: download his code and run it on the SAT 2010 benchmarks. Compare to the other algorithms from that competition. Not, of course, a definitive test, but if it performs even in the middle of the pack, you'll know it's worth a closer look.
I'm not at all an expert on this material, but some random points to get people started:<p>0. This guy looks orders of magnitude less loony than the usual P=NP prover. I hope someone who knows this material well steps in soon.<p>1. He has implemented his algorithm. This is a very good sign -- most garbage "algorithms" are exposed as broken when people try to implement them.<p>2. Most 3SAT problems are "easy". Being able to solve a particular problem doesn't mean that the algorithm works in general. He would have done better to demonstrate his algorithm on known-hard problems.<p>3. He states a running time of O(n^4 m), but his experimental results don't scale quite like that; perhaps his analysis is wrong, or perhaps there's just a monster hiding behind the big-O.<p>4. If he is correct, and his algorithm is optimal, we probably don't need to worry very much about cryptography yet: the algorithm looks far too slow to be a practical attack on existing cryptosystems.<p>(EDIT: Oops, in the time it took me to write that, 18 other people posted comments. Well, so much for getting the discussion started...)
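A quick way to sanity-check point 3 empirically: fit the slope of log(runtime) against log(n) and compare it with the claimed exponent. A minimal sketch, using synthetic timings rather than any figures from the paper:

```python
import math

def fitted_exponent(sizes, times):
    """Least-squares slope of log(time) vs log(n): for a pure polynomial
    runtime t ~ c * n^k, the slope recovers k. Inputs here are hypothetical
    measurements, not data from the paper."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data generated from t = n^4 exactly, so the fit recovers ~4.
sizes = [100, 200, 400, 800]
times = [n ** 4 for n in sizes]
```

With real measurements on hard instances, a fitted exponent that keeps climbing as n grows (rather than settling near 4) would be the tell-tale sign that the analysis is wrong rather than the constant merely being large.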
OK, I just upvoted purely to keep this on the front page a little longer so that someone more qualified has a chance to glance at it. Without any evidence, and with no qualification to judge it myself, it seems highly unlikely.
Is anyone else struggling to understand theorem 2 on pages 15-16?<p>It sounds like what they are saying is equivalent to the following:
If S1 intersect S2 has a solution, and S1 intersect S3 has a solution, then the system S1 intersect S2 intersect S3 has a solution.<p>But this is evidently false. Consider the case where the CTS included each of the following rows and was empty everywhere else (after re-ordering the columns so that same-name columns end up in the same final column):<p><pre><code> (1) 000
(2) 001
(3) 0 00
</code></pre>
In this case, (1) and (2) are consistent, and (1) and (3) are consistent, but (2) and (3) are inconsistent.<p>I suspect the problem they set up the induction for might not perfectly align with the theorem, but it needs more careful examination.
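The failure mode described above (pairwise-consistent systems that are jointly inconsistent) is easy to demonstrate mechanically. A toy check using generic equality constraints rather than the paper's CTS rows:

```python
from itertools import product

def satisfiable(constraints, num_vars=3):
    """Brute force: does some boolean assignment satisfy every constraint?
    Each constraint is a predicate over the assignment tuple."""
    return any(all(c(a) for c in constraints)
               for a in product((0, 1), repeat=num_vars))

# Three constraints with the same shape as the counterexample:
# any two are simultaneously satisfiable, but all three are not.
c1 = lambda a: a[0] == a[1]   # x = y
c2 = lambda a: a[1] == a[2]   # y = z
c3 = lambda a: a[0] != a[2]   # x != z
```

If Theorem 2 really does claim that pairwise solvability implies joint solvability, an analogous brute-force check over the paper's actual CTS rows would settle the question quickly.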
Having read this paper (and being reasonably knowledgeable on the subject), I'm convinced this paper is wrong, or at least badly underexplained.<p>The algorithm resembles the well-known '3-consistency' technique, which has been shown to solve quite a lot of problem classes, in particular some that are solved very poorly by the usual learning-based systems in most SAT solvers.<p>The paper aims to strengthen 3-consistency slightly, using permutations. While that is a reasonable strategy, it is entirely unclear to me, and unexplained in the paper, why that would yield polynomial solving time.
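For readers unfamiliar with the technique, here is a minimal toy sketch of 3-consistency filtering in a SAT setting, as I understand the general idea (my own reconstruction, not the paper's algorithm): keep, for every variable triple, the local assignments that falsify no clause inside the triple, then prune assignments with no compatible counterpart in an overlapping triple.

```python
from itertools import combinations, product

def three_consistency(num_vars, clauses):
    """Toy 3-consistency filter. Clauses are DIMACS-style literal tuples
    (3 = x3, -3 = NOT x3). Returns False when some variable triple loses
    all 8 of its assignments (a proof of unsatisfiability); True is
    merely inconclusive."""
    triples = list(combinations(range(1, num_vars + 1), 3))
    allowed = {}
    for t in triples:
        ok = set()
        for bits in product((False, True), repeat=3):
            a = dict(zip(t, bits))
            # keep the local assignment unless it falsifies a clause
            # whose variables all lie inside this triple
            if all(any(a[abs(l)] == (l > 0) for l in c)
                   for c in clauses if all(abs(l) in a for l in c)):
                ok.add(bits)
        if not ok:
            return False
        allowed[t] = ok

    changed = True
    while changed:  # propagate between overlapping triples to a fixpoint
        changed = False
        for t in triples:
            for u in triples:
                shared = [v for v in t if v in u]
                if t == u or not shared:
                    continue
                # u's surviving assignments, projected onto shared variables
                proj = {tuple(bits[u.index(v)] for v in shared)
                        for bits in allowed[u]}
                keep = {bits for bits in allowed[t]
                        if tuple(bits[t.index(v)] for v in shared) in proj}
                if keep != allowed[t]:
                    allowed[t] = keep
                    changed = True
                    if not keep:
                        return False
    return True
```

Note that pairing all O(n^3) triples makes each pass O(n^6), so this is only for tiny instances; real consistency-based solvers restrict which triples they track. The point of the sketch is the semantics, not the complexity.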
Just this month there's another paper on arXiv that uses 3-SAT to prove P!=NP: <a href="http://arxiv.org/abs/1101.2018" rel="nofollow">http://arxiv.org/abs/1101.2018</a><p>A list of articles published on the P=NP debate is here: <a href="http://www.win.tue.nl/~gwoegi/P-versus-NP.htm" rel="nofollow">http://www.win.tue.nl/~gwoegi/P-versus-NP.htm</a><p>Looks like someone thinks they've solved the problem every month or so :)
What we should do is generate a class of instances of 3-SAT that are expected to be hard, and then try the solver on them and see what the runtime looks like as a function of the size of the input.<p>Recently someone claimed a polynomial-time graph coloring algorithm. I generated hard instances, their "solver" blew up. Claim debunked. It should be simple enough to do the same for this (for some definition of simple).<p>The key lies in generating hard instances. As cperciva has said in <a href="http://news.ycombinator.com/item?id=2121837" rel="nofollow">http://news.ycombinator.com/item?id=2121837</a> - most instances of most NPC problems are "easy."
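For 3-SAT, the standard recipe for "expected hard" instances is uniform random formulas near the satisfiability phase transition, at roughly 4.26 clauses per variable. A minimal generator sketch (DIMACS output, so the instances can be fed to any competition solver):

```python
import random

def random_3sat(num_vars, ratio=4.26, seed=0):
    """Uniform random 3-SAT at a given clause/variable ratio; ~4.26 is
    where random instances are empirically hardest. Returns clauses as
    DIMACS-style literal triples."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * num_vars)):
        variables = rng.sample(range(1, num_vars + 1), 3)  # distinct vars
        clauses.append(tuple(v if rng.random() < 0.5 else -v
                             for v in variables))
    return clauses

def to_dimacs(num_vars, clauses):
    """Serialize to the DIMACS CNF format used by SAT competitions."""
    lines = ["p cnf %d %d" % (num_vars, len(clauses))]
    lines += ["%d %d %d 0" % c for c in clauses]
    return "\n".join(lines)
```

Generating a ladder of sizes (say n = 100, 200, 400, ...) and timing the solver on each gives exactly the runtime-vs-size curve the comment asks for.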
tl;dr version of his strategy (as far as I understood it):<p>1. For a fixed permutation, construct a Viterbi-like search on the triplet assignments -- if it fails, the formula is not satisfiable. However, if it doesn't fail right away, there is still no guarantee that a satisfying assignment exists. Call this structure the compact triplets form (CTF) or whatever.<p>2. Construct a small set of permutations (at most m) such that whenever the original CNF is unsatisfiable, at least one of these permutations' CTFs will fail.<p>3. Efficiently(?) combine the structures.<p>I didn't read it deeply, but from what I understood that seems to be the top-level strategy. I'm not 100% certain about it.
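My guess at what step 1 could look like concretely (a reconstruction under my own assumptions, not the paper's definition): chunk the permuted variables into consecutive triples, treat each triple's eight assignments as states, and forward-filter states that can follow some surviving predecessor without falsifying a clause. An empty layer proves unsatisfiability; survival proves nothing, matching the caveat in the summary.

```python
from itertools import product

def viterbi_like_filter(order, clauses):
    """Forward filtering over triplet states. `order` is a permutation of
    the variables (length divisible by 3); clauses are DIMACS-style
    literal tuples (3 = x3, -3 = NOT x3). Returns False only on a
    definite refutation."""
    assert len(order) % 3 == 0
    triples = [tuple(order[i:i + 3]) for i in range(0, len(order), 3)]

    def falsifies(assign, clause):
        # falsified only if every variable of the clause is assigned
        # here and every literal comes out false
        return all(abs(l) in assign and assign[abs(l)] != (l > 0)
                   for l in clause)

    def ok(assign):
        return not any(falsifies(assign, c) for c in clauses)

    # layer 0: the first triple's assignments that falsify no clause
    layer = [bits for bits in product((False, True), repeat=3)
             if ok(dict(zip(triples[0], bits)))]
    if not layer:
        return False
    for prev_t, t in zip(triples, triples[1:]):
        nxt = [bits for bits in product((False, True), repeat=3)
               if any(ok({**dict(zip(prev_t, pb)), **dict(zip(t, bits))})
                      for pb in layer)]
        if not nxt:
            return False  # no state survives: definitely unsatisfiable
        layer = nxt
    return True  # inconclusive, as step 1 of the summary warns
```

Because only adjacent triples are checked jointly, clauses spanning distant triples constrain the search weakly, which is presumably why step 2 needs multiple permutations.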
Where is the market among the HN community on betting that this proof is correct (or the market for P=NP in general)?<p>My bid/ask is 0% / 0.02%.<p>I'd wager at most a 1% chance that P = NP, and I'll be generous and put the odds that this particular person cracked it first at 2% of that 1% (I know he has code posted, but think of all the smart people who have failed; and within my 1% is the case where P=NP but no human ever proves it). Would anyone offer better odds than 1 in 5,000, or make a bid?<p>My knowledge is limited to taking Sipser's intro class in school; but as a programmer, I always find subset-sum to be the most tangible and convincing example that NP-hard feels pretty tough.<p>I love these announcements, though; I am always humbled and fascinated by the resulting discussion.
Zero other (English) publications by the author in the Cornell archive, and only four references within. Not an indicator of whether the paper is correct (I haven't read it), but that's "smelly".
I know nothing about the P=NP debate, but does the claim P=NP, together with the existence of a published implementation (<a href="https://github.com/anjlab/sat3" rel="nofollow">https://github.com/anjlab/sat3</a>), make this claim easier to verify than a claim of P!=NP? Isn't the point that P=NP would have great practical significance that would be immediately recognized?
There is an Inkling prediction market on how likely it is that this will be proved correct by the end of 2011: <a href="http://home.inklingmarkets.com/markets/34366" rel="nofollow">http://home.inklingmarkets.com/markets/34366</a>
Could it be ... P = NP? Most people suspect otherwise.<p>Someone should really verify his algorithm on standard SAT benchmark sets. But I have to say his tone is commendably humble -- he suggests he may have solved the problem, but says it is up to us to verify. I'd like to follow this further, so I bookmarked it via an upvote.
I don't think one example constitutes a conclusion. While demonstrating the nonexistence of a polynomial-time algorithm for 3-SAT would prove P!=NP, an algorithm succeeding on a few examples merely means "move along, let's try a harder instance".