
P is not NP: a proof perhaps


The blogosphere is agog with news of possibly the biggest story in computer science: a serious (yet to be checked) proof that P≠NP. The story was first broken on Greg and Kat’s Blog, which quotes from an email circulated by the author, Vinay Deolalikar of HP. On his home page, Deolalikar says:

P is not equal to NP. 6th August, 2010 (66 pages 10pt, 102 pages 12pt). Manuscript sent on 6th August to several leading researchers in various areas. Confirmations began arriving 8th August early morning. The preliminary version made it to the web without my knowledge. Please note that the final version of the paper is under preparation, and is to be posted here shortly (in about a week). Stay tuned.

In Gödel’s Lost Letter and P=NP, Lipton says:

My first thought is who knows—perhaps this is the solution we all have been waiting for. If it is correct, the Clay [Millennium Prizes] list will drop down to five. I assume he would take the prize.

But first there is the small issue of correctness. Is his paper correct?

I suggest you look at his paper to see his own summary of his approach, and of course the details of his proof. At the highest level he is using the characterization of polynomial time via finite model theory.

Then, he attacks SAT directly. He creates an ordered structure that encodes SAT. He then argues that if P=NP, then by the above theorem it must follow that SAT has certain structural properties. These properties have to do with the structure of random SAT. This connection between finite model theory and random SAT models seems new to me.
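For readers who, like me, need a refresher: the characterization Lipton refers to is, I believe, the Immerman–Vardi theorem, which says that on ordered finite structures the properties decidable in polynomial time are exactly those definable in first-order logic augmented with a least-fixed-point operator:

\[ \mathrm{P} \;=\; \mathrm{FO}(\mathrm{LFP}) \qquad \text{on ordered finite structures} \]

This lets a proof reason about logical definability instead of Turing machines. The order is essential: on unordered structures, FO(LFP) cannot even express that the domain has an even number of elements, so there it captures strictly less than P.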

Me? I’m no expert, but I’m about to sit down with a copy of the preliminary draft to see how far I can get into the claimed proof.

The next day

Not very far without help, as it turns out. However, some help is at hand. Lipton has a blog post on the issues that the (alleged) proof raises: do look at the comment threads, because serious stuff is going on there. In addition, Suresh Venkatasubramanian has started a wiki to crowdsource the check. Even Terry Tao popped into the discussion to point to PolyMath. There goes my leisure time.

A week later

Lipton published an email from Neil Immerman that points out a possible hole in the proof. The closing paragraph of Lipton’s post is:

Most of us have focused on the issues of 2-SAT and XOR-SAT compared to k-SAT, but Neil’s comment points out serious flaws in the finite model part. It was the connection of this work with the random structure of k-SAT that excited many of us in the first place. Unless Neil is wrong, it seems that his points are very serious.

About 10 days later

Vinay Deolalikar has the following statement on his website:

The preliminary version was meant to solicit feedback from a few researchers as is customarily done. It illustrated the interplay of principles from various areas, which was the major effort in constructing the proof. I have fixed all the issues that were raised about the preliminary version in a revised manuscript (126 pages); clarified some concepts; and obtained simpler proofs of several claims. This revised manuscript has been sent to a small number of researchers. I will send the manuscript to journal review this week. Once I hear back from the journal as part of due process, I will put up the final version on this website.

Lipton comments:

The passage of time and recent coverage may be obscuring the fact that privacy was Dr. Deolalikar’s original wish—and that he was “outed” by a sequence of forwarded e-mails “a couple of steps removed” (original source). We feel that any effort to draw conclusions beyond the fact that his previous releases needed fixing, and to declare what his further actions should be, should be tempered by this realization. We have expressed disappointment at the lack of response beyond the synopsis, but what he is doing now is in line with his original intent.

Some things I’m only just catching up on

Communications of the ACM (CACM) carries a guest blog post by Lipton, in which he explains why he thinks Deolalikar’s attempt is significant:

I noticed a couple of things right away about the paper. It was long, and lacked detail in many places; but the paper seemed quite interesting and serious. The proof makes an interesting connection between two very different areas of theory. Deolalikar used a special way to define efficient algorithms that is based on logic. This method of defining efficient computation does not use machines at all—it is called “finite model theory.” His proof also used an area of statistical physics.

Two things struck me immediately. I had never seen the connection of these areas in any attempted proof of Q [the P versus NP question]. Also, the area of finite model theory has had a number of tremendous successes before. Without going into any details, one of the principles of mathematical proofs is that often there are multiple ways to define something. Many times one definition, even though it is equivalent to another, is the right way to prove something. Deolalikar’s use of this logical theory seemed to be potentially very powerful.

The more serious flaw is Deolalikar’s use of results from statistical physics. This involves the statement D that I mentioned already. It is the issue that has attracted hundreds of comments and prompted the creation, by Terence Tao, of a whole wiki dedicated just to this issue. And it is the most interesting potential flaw, since even if it is incorrect we all seem to feel that it raises a number of intriguing questions. One important point is that while a proof is eventually either correct or not, many incorrect proofs over the years have introduced important new concepts to mathematics. It is quite possible that Deolalikar has done this. Time will tell.
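Lipton’s remark about multiple equivalent definitions is worth making concrete, since it is exactly what makes finite model theory attractive here. The classic example, which I assume is the kind of success he has in mind, is Fagin’s theorem: NP coincides with the class of properties of finite structures expressible in existential second-order logic,

\[ \mathrm{NP} \;=\; \exists\mathrm{SO}, \]

a purely logical definition with no machines, clocks, or tapes anywhere in it. The Immerman–Vardi characterization of P quoted earlier plays the same role on the other side of the question.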

Also interesting is Terry Tao’s comment on Lipton’s blog post:

If nothing else, this whole experience has highlighted a “philosophical” barrier to P != NP which is distinct from the three “hard” barriers of relativisation, natural proofs, and algebraisation, namely the difficulty in using average-case (or “global”) behaviour to separate worst-case complexity, due to the existence of basic problems (e.g. k-SAT and k-XORSAT) which are expected to have similar average case behaviour in many ways, but completely different worst case behaviour. (I guess this difficulty was well known to the experts, but it is probably good to make it more explicit.)
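Tao’s k-SAT versus k-XORSAT contrast is easy to make concrete. An XOR clause such as x0 ⊕ x2 ⊕ x5 = 1 is a linear equation over GF(2), so any XOR-SAT instance, however adversarial, falls to Gaussian elimination in polynomial time, while 3-SAT is NP-complete; yet random instances of the two are expected to behave alike in many average-case respects. Below is a minimal sketch of the easy direction in Python. It is my own illustration, not from any of the posts; the function name and the clause encoding are invented for the example.

# Solve a k-XORSAT instance by Gaussian elimination over GF(2).
# A clause is a pair (variables, parity), e.g. ([0, 2, 5], 1)
# meaning x0 XOR x2 XOR x5 = 1.  Rows are stored as bitmasks:
# bit i holds the coefficient of x_i, bit n holds the right-hand side.

def solve_xorsat(clauses, n):
    rows = []        # reduced rows kept so far
    pivots = {}      # pivot column -> index into rows
    for variables, parity in clauses:
        r = parity << n
        for v in variables:
            r ^= 1 << v
        # Eliminate every existing pivot column, highest first, so that
        # lower bits introduced along the way are still cleaned up.
        for col in sorted(pivots, reverse=True):
            if (r >> col) & 1:
                r ^= rows[pivots[col]]
        coefficients = r & ((1 << n) - 1)
        if coefficients == 0:
            if r:                    # reduced to "0 = 1": unsatisfiable
                return None
            continue                 # clause was redundant
        pivots[coefficients.bit_length() - 1] = len(rows)
        rows.append(r)

    # Back-substitute; free variables default to 0.  Each pivot column is
    # the highest variable in its row, so lower pivots are computed first.
    x = [0] * n
    for col in sorted(pivots):
        r = rows[pivots[col]]
        value = (r >> n) & 1
        for c in range(col):
            if (r >> c) & 1:
                value ^= x[c]
        x[col] = value
    return x

# Example: x0 XOR x1 = 1 and x1 XOR x2 = 1 is satisfied by [0, 1, 0].
print(solve_xorsat([([0, 1], 1), ([1, 2], 1)], 3))

Nothing of the sort exists for 3-SAT, and that worst-case gap is precisely what Tao says average-case arguments struggle to see.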