We don't have a result yet, but what is your opinion on what it may be? For example, P=NP, P!=NP, or P vs. NP is undecidable? Or, if you are not sure, it is fine to simply say: I don't know.
I think an equally important question is whether anyone will publish a work that might be so disruptive. Without going into much detail about my research on this subject, we do know the following things:
1. If P=NP, all current cryptographic systems will be compromised, and there is an interest in not compromising them.
2. Any result for P vs. NP will likely take a new, unestablished method or technique, which might itself have ramifications in other established disciplines, perhaps even the foundations of computation itself.
3. The P vs. NP problem is the Clay prize problem most likely to be solved by an amateur.
So let's suppose that P=NP and an unpublished, auto-didactic mathematician solves it, finding a result that, although true, contradicts some very established method. Let us choose Cantor's Diagonal Argument, which is not without controversy but is generally assumed to be correct, and which establishes that the Real numbers are strictly greater in cardinality through the principle of "uncountability". But what if there is a set that, when used in Cantor's diagonal method, yields a non-trivial counterexample to the method? And what if the set has the expressive power of the language that describes the Real numbers, such that the set itself can be seen as bijective to the Reals?
Well, such a method would tell us the Reals are countable... so any paper that proves P=PSPACE (if the Reals are countable, it follows that P=PSPACE; I won't go into the proof of this here) would also have this ramification for countability and the cardinality of the Reals. So someone is wrong. But let's assume this set exists, and begin there.
There exists a set whose cardinality is countable and whose language has the same expressive power as the Real numbers. OK. Now we can easily see through this set that P=PSPACE and P=NP. So no proof of P=NP could ever be published if P=PSPACE, because any proof that establishes P=NP through this method, or any equivalent method via a non-trivial counterexample to Cantor's diagonal argument, will also imply that the Real numbers are countable, and no one accepts that $2^{\aleph_0}=\aleph_0$. And since no one accepts this, no one will review a paper with these kinds of conclusions, even if they are true, because reviewers only have to read the lemmas that lead up to the conclusion, not the logic of the lemmas themselves.
So I've been sitting on an answer to all of this for several years now, and I have no idea how to present it as my first publication, since I am self-taught in formal languages, analysis and computation. Eventually, out of frustration, I just decided to strip all reference to the P vs. NP problem from my paper, even though I have an algorithm that solves a PSPACE-complete problem in deterministic sub-polynomial time, since I will need the exception to the diagonal argument to adequately prove P=PSPACE. And because of that prize, and the publicity that the problem has, everyone and their grandmother is trying to solve it. So no independent research is being reviewed:
"This unpublished author has a solution to P vs. NP? Put it in the pile over there with all the others."
But for me it has more to do with the truth of the logic. I'd rather NOT get the prize and have the truth about computability, countability and cardinality come out. If someone else wants the prize for P vs. NP and publishes on it before I can, so be it, let them eat cake; at least I did the work, I know in my heart that I did it first, and I tried my best to find a reviewer. I'm currently waiting for the board at the Journal of Universal Computer Science to agree to review the version that proves the existence of a set that yields an exception to Cantor's argument, since it does exist, is consistent with ZFC, and its existence is quite evident. I took all mention of P vs. NP out of the paper. But if not enough reviewers step forward, I'm not sure where to go next, or how to rewrite it for another review board... there do not seem to be many double-blind review journals anymore. And the lack of reviewers for auto-didacts on an important problem most likely to be solved by an auto-didact is a travesty for the scientific community.
/opinion
As I understand it, P and NP have to do with time (the time it takes to solve a problem, polynomial or not). With LINEAR 1D time, e.g., where one second follows another, or linearly, Cantor's (and others') proof that the Reals are more infinite than the Naturals would, I agree, then result in P!=NP. However, P=NP in the sense that P!=NP self-referentially (or perhaps Cantor's cardinality viewed infinitesimally; fractionally thus factorially, perhaps in nature related to the Fibonacci sequence and spiral): in CURVED, 4D+ space-time, binary 0's and 1's might nest the Reals (between I would dub Aleph N Cardinality 0 and 1, or ultimately 1, with counterfactual 0), which per Cantor's proof, or to my eye prima facie, the Reals nest the Naturals, in turn symmetrically, now as mentioned in 4D+ spacetime, recursively once again nesting 0 and 1 (each Natural through addition being so many 1's), ad infinitum, each cycle from the cosmos through to quanta considered a 'singularity,' yet complete and consistent.
They are both polynomial times, CJ, except NP is non-deterministic polynomial time and P is deterministic polynomial time. Those terms are a bit antiquated and are now described differently than in their original incarnation. Problems in P result from certificate questions, that is: given an answer to some decision problem, does that answer solve the problem? Whereas NP results from decision problems over a certificate type, and asks the question: does this problem have a true certificate? P# asks us what that certificate is, if it exists.
No one has been able to use diagonalization to prove P!=NP. So even if Cantor's diagonalization works, that does not imply P!=NP. However, if diagonalization does not work, and there exists an exception to that method for some set, it almost immediately follows that P=PSPACE.
You lost me at some point in the beginning, but...
"to my eye prima facie the Reals nest the Naturals"
Of course the Naturals are a subset of the Reals; however, the question of countability is a question of representation, not belonging, i.e., can the natural numbers "count" in such a way that they represent a bijection to the Reals? That is the essence of Cantor's diagonalization, which, if a counterexample to it exists for some set, implies P=NP.
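To make the method concrete for anyone following the thread, here is a minimal sketch of the diagonal construction on a finite table (my own illustration, not taken from any paper under discussion; the table contents are arbitrary):

```python
# A minimal sketch of Cantor's diagonal construction on a finite table.
# Each row stands in for the first n bits of a listed real in [0, 1);
# flipping bit i of row i yields a string that differs from every row,
# so it cannot appear anywhere in the list.
def diagonal_flip(table):
    return ''.join('1' if table[i][i] == '0' else '0' for i in range(len(table)))

table = ["0110", "1010", "0011", "1111"]
d = diagonal_flip(table)                 # "1100" for this table
assert all(d != row for row in table)    # the diagonal string is unlisted
```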
J., as I had explained, again: in 4D+ CURVED (v. linear, or Cantor's diagonalization approach, etc.) space-time, I give an emphatic Yes to, as you say, whether "natural numbers 'count' in such a way they can represent a bijection to the reals." Perhaps a closer look at my response would help. Thanks.
I'm sorry, I'm just not sure what you mean by "4D+ curved space time". Also, could you please tell me what you mean by "P!=NP self referentially"? And "binary 0's and 1's might nest the Reals (between I would dub Aleph N Cardinality 0 and 1, or ultimately 1, with counterfactual 0)" could be clarified. I do know what Aleph_0 is, which is what I assume you mean by Aleph N, but by counterfactual 0, do you mean the output on the string at point (x,x) on the diagonal in the matrix? Are you implying that the cardinality of the binary numbers (between 0 and 1) is less than the Reals?
It's difficult to look closely at what you are saying when I'm having trouble understanding what you mean by it. You seem to be making physics references, so are you implying a field?
J., for a third time, I explained my meaning of 4D+ CURVED spacetime in the body of my first post and once again in parentheses (v. linear...) in my second post for you. Perhaps I did not make the D in 4D+ explicit, in which case I am referring to the commonly-used D as Dimension, an understandable usage in context. Again, closer reading of my original (in more ways than one) post might provide more clarification, the implications of which you may apply to any store of knowledge that you like (e.g., perhaps by Aleph N I do mean Aleph 0; if so, thank you for that edit to my post; by varying Cardinality (without equivocating) through curved spacetime, I do mean 0 and 1 can be self-referentially/fractionally BOTH less than and greater than the Reals, and yes, corresponding to a curved spacetime field). Further clarification of this is copyrighted and under publication, expected 2014. Thank you.
As far as I know, yes (but recall that there is a difference between "DECIDABLE" as it is used in logic and "DECIDABLE" as it is used in computability theory, and here I mean the former). Scott Aaronson has written about this: see http://www.scottaaronson.com/papers/pnp.pdf
One interesting implication comes from Ladner's theorem. If P != NP (which I'd bet is so), then there exists at least one problem in NP that is neither in P nor NP-complete. I wonder what problems would fall in that category.
@Sajeevan Koilerian: I was not aware of this. Every computability theory text I own uses the same definition of decidable as every mathematical logic text I own, differing only in notation depending on the author (I'm talking about Cooper, Curry, Boolos, Copi, Sider, Davis, etc.; I have lots), so I don't quite follow you. I looked at the article you gave and it only confirms that it's the same. When we talk about the existence of an algorithm (in the computability theory sense), it follows from a computable function, just as when we talk about the decidability of a formal system we are determining whether we can test each formula by a method of enumeration, usually either by giving the algorithm when we can, or by giving a proof showing we can't (usually a reduction that was in one form a diagonalization proof). The only difference lies in how it is structured (one as a decision problem, one as a formal system, i.e., sets versus formulations).
@Cj: I am not sure if you noticed this or not, but you stated the class NP incorrectly (you said non-polynomial, which is a common misconception). The P vs. NP problem asks whether the set of problems that can be solved in polynomial time with respect to input size (P) is equal to the set of problems whose solutions can be checked in polynomial time with respect to input size (NP).
We know P is a subset of NP. We don't know whether P=NP or P is not equal to NP. The question I was posing is a simple decision problem. It's either Yes, No, or Undecidable (meaning there doesn't exist a proof or algorithm that will verify a yes or no instance when comparing the two sets; NOTE: this can be formulated in many different ways). This is one of the most important problems in Theoretical Computer Science and Foundations.
For example: take a 0-1 Knapsack instance (0-1 Knapsack is known to be NP-hard). Its decision variant asks: is there a selection of items with profit at least (or at most, if you like) C such that the weights do not go beyond W (the knapsack capacity)? This is easy to check for a given instance, as you just need to verify that the selection doesn't violate the capacity and that its profit is indeed within the limits.
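To illustrate the "easy to check" half of this, here is a sketch of my own (the instance values and helper name are just for illustration): verifying a candidate certificate for the 0-1 Knapsack decision problem takes linear time, even though finding a good certificate is the hard part.

```python
# Sketch of the NP-style verification step for 0-1 Knapsack: given a
# candidate certificate (a set of item indices), checking it is linear
# time in the number of items.
def verify_knapsack(weights, profits, chosen, W, C):
    """True iff the chosen items fit in capacity W with profit >= C."""
    total_w = sum(weights[i] for i in chosen)
    total_p = sum(profits[i] for i in chosen)
    return total_w <= W and total_p >= C

# Example instance: items (weight, profit) = (2,3), (3,4), (4,5).
print(verify_knapsack([2, 3, 4], [3, 4, 5], [0, 1], W=5, C=7))  # True
```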
The best known 0-1 Knapsack algorithm, as far as I know, is the DP algorithm (which has been modified to create PTASs in the past), which has time-complexity Theta(nW). This is deceptively not polynomial with respect to the input size, since W is written down using only O(log W) bits.
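For reference, here is a minimal sketch of that DP (my own illustration of the textbook one-dimensional formulation):

```python
# The standard Theta(nW) dynamic program for 0-1 Knapsack. It is
# pseudo-polynomial: the table has W+1 cells, but W itself occupies
# only O(log W) bits of the input.
def knapsack_dp(weights, profits, W):
    best = [0] * (W + 1)               # best[c] = max profit within capacity c
    for w, p in zip(weights, profits):
        for c in range(W, w - 1, -1):  # descend so each item is used at most once
            best[c] = max(best[c], best[c - w] + p)
    return best[W]

print(knapsack_dp([2, 3, 4], [3, 4, 5], 5))  # 7 on the instance above
```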
What P vs. NP asks is whether there is a polynomial time algorithm that solves 0-1 Knapsack. Note that there are lots of theorems out there giving other routes to showing P!=NP or P=NP.
My favourite example is that if you can come up with any c-approximation algorithm (where c is a constant) for the general Travelling Salesman Problem (TSP, without the triangle inequality), then P=NP.
It's interesting to note that there currently exists an accepted pseudo-polynomial time algorithm for the knapsack problem: a memoized dynamic program.
Worst case is still exponential.
Yes, the same one I stated above. This is due to Knapsack not being NP-complete in the strong sense (for its decision version). If it were, then an FPTAS would not exist for the non-decision version of 0-1 Knapsack. This just follows from the concept of strongly NP-hard problems and their relationship to FPTASs and the existence of pseudo-polynomial time algorithms.
Daniel, thanks for that correction and clarification; sorry for my mixup, an old misunderstanding of mine that apparently came out again. With what you said, my understanding is that the P-NP question asks whether finding the solution takes as long as checking the solution, no? If so, then as I mentioned at the start, it is a question about time and time only, but here in the context of P-NP, or: does checking a solution take AS (P EQUALS NP) long as getting the solution, no?
If Fabricio's entry above about Ladner's theorem is correct, then from that theorem P does not equal NP, with which I would agree and have in mind (until the 2014 publication) as to why. In any case, at the moment I need more time to provide a more considered reply here...
Cj:
TIME and SPACE are related computationally. PSPACE-complete problems are NP-hard, and thus solving a PSPACE-complete problem in P also answers the question about its relationship to NP.
>P-NP question asks if the solution takes as long as checking the solution
Yes, but "as long as" is a bit misleading, since, for example, NP# problems will need an internal verification upon any produced certificates, and thus must take longer than it's verification. However, if NP# (which is NP-Hard) is within the class of it's verification upon a certificate, i.e. P, then P=NP.
To prove that P!=NP by this route, one must find an exponential (or at least super-polynomial) lower bound on the circuit complexity of NP-complete problems.
What is interesting to me now is that even if there exists a polynomial time algorithm for some NP-hard problem, its constant could be very large over a large polynomial, still making it intractable.
Even if it is a large constant over linear time, say the constant is 12! over linear time checks and modifications, P=NP (compare that to brute force over 2^x, where x>29). But such an algorithm would still take days to compute.
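A quick back-of-envelope check of those constants (my own calculation, just to make the comparison concrete):

```python
# Comparing a linear-time algorithm with constant 12! against 2^n
# brute force: 12! is roughly 2^29, and 12!*n is overtaken by 2^n
# already around n = 34.
import math

c = math.factorial(12)        # 479001600
for n in (30, 34, 40, 60):
    print(n, c * n, 2 ** n)
```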
There are more approaches than this that people can attack. Another is the Unique Games Conjecture. That conjecture has ramifications that can assist in attacking the P vs. NP problem, and also the hardness of approximation (meaning our approximation results will hold no matter what). As one might say, until somebody comes up with a result, the knowledge of how to solve this problem remains elusive, except for its logical structure in description.
For example, if I can approximate a solution to within 3/2 of the makespan in the Makespan problem for unrelated parallel machines, then P=NP (because having such an algorithm would allow me to solve another problem that is NP-hard). If UGC is true, it may lead to a lot of lovely results about hardness of approximation, meaning not only that many NP-hard problems are hard to solve exactly, but that they are hard even to approximate well. Hardness of approximation and UGC give us another route. Though the conjecture is quite elusive, it has led to a lot of work in computational complexity in relation to the P vs. NP problem and approximation algorithms.
Gerhard Woeginger has a web page dedicated to the P vs. NP problem:
http://www.win.tue.nl/~gwoegi/P-versus-NP.htm
This page includes a list of papers that claim to solve the P vs. NP question. It also includes a link to a 2002 poll (made by William Gasarch) of computer scientists (and mathematicians, I guess) concerning their opinion on the P vs. NP question (e.g., when they think this question will be resolved). In 2012 the poll was repeated and the results can be found here:
http://www.cs.umd.edu/~gasarch/papers/poll2012.pdf
Personally, I would like to disagree with the notion that "The P vs. NP problem is the Clay prize problem most likely to be solved by an amateur." I do not see any reason why such a statement should be true: we do not know the answer and we do not know how to obtain it, so we do not know who will be likely to solve the problem.
If you look at things from a non-technical perspective, it is difficult to see how the so-called PRISM project currently making headlines in the news could have been implemented if the set of openly-available cryptographic methods were not at least "fragile". Furthermore, this echoes the tight regulations in place in most countries with respect to data protection and data security, which are also making headlines in terms of the cloud computing paradigm.
Here are a few things I was able to gather over the years on the topic.
It is known in the research literature that AES (and others) can be reduced to NP-complete problems by defining a system of associated polynomials. Furthermore, hard problems such as factorization and discrete logarithms are known to be vulnerable to Gentzen systems/Turing oracle approaches. This leaves SHA-2 hashing schemes which are also known to have representations in Hilbert spaces.
Furthermore, the proof by D.M. Kane that "Unary Subset-Sum is in Logspace" has clear implications with respect to the P=NP hypothesis if seen from the perspective of the representation of cellular automata. If my memory serves, the paper is not listed in the www.win.tue.nl list.
The main issue with respect to the problem is that there is no non-tacit consensus in the research community at the moment on the question. Serious mathematical monographs and papers diverge on the likely answers.
If we do some history on the subject, Stephen Smale's papers on the problem (the P=NP problem is also termed Smale's 3rd problem) state as a conjecture: "There is no polynomial time algorithm over C which decides the Hilbert Nullstellensatz." However, the reduction of the P=NP question to a problem as canonical as the Hilbert Nullstellensatz is itself troubling.
But like they say: a lock is better than no lock...
We don't know. Is verifying a given formal proof "easier" than creating one? Generally, we "agree" that creating proofs is "harder" than verifying them. But sometimes we prefer to create a proof rather than verify someone else's! So I think that P vs. NP is undecidable (and will lead us to adopt a new axiom, as we did in Geometry).
The relationship between the classes P and NP has been resolved in these papers:
http://thescipub.com/pdf/10.3844/jcssp.2012.1036.1040
http://www.bapress.ca/ccc/ccc2013-1/2_13051501_Paper_Plotnikov.pdf
I think this question belongs to philosophy, not science. The current definition of NP should be changed to include the concept rather than specific algorithms; i.e., solving one NP problem in P time should not necessarily mean that P=NP, and vice versa.
Dear colleagues, maybe this article will bring some new information to your discussion: http://link.springer.com/chapter/10.1007/978-94-007-7684-5_22?no-access=true
Very interesting, Algirdas. I do happen to think that P=NP (and have something of a proof of my own, although it relies on an initial paper I have in review currently). But I still have some doubts on your proof after an admittedly brief look at your paper.
1. It does not introduce a new technique or concept with regard to computability. I would expect something outside the current computational paradigm, or a technique that has not yet been utilized.
2. And while your paper does seem knowledgeable about the problem (many I have read are not), 2SAT has been known to be in P for some time now, so I wonder why you would spend an entire section on a proof for 2SAT. Of course anything expressed in 2SAT can also be expressed in 3SAT; it's just that the time to solve the 2SAT problem increases non-trivially with respect to the 3SAT formula as the number of clauses increases. What I am not yet convinced of is what happens when you convert your algorithm from 2SAT to 3SAT.
3. A 3SAT boolean formula where all its variables are unique is trivially solved in P. I cannot yet figure out why you might label this as the general case and then proceed to the "special cases" where they are not unique. I am not convinced your "special cases" are in P.
How might I be convinced? Some combination of the following:
1. Don't use 2SAT to solve 3SAT.
2. Provide us with benchmark results of a working algorithm which is able to solve High Backbone K-SAT problems for K>=3.
3. Provide a counter-proof to Turing's limitations found in his paper on the Entscheidungsproblem. (Disprove the Church-Turing Thesis)
4. For me personally: show an algorithm that runs in linear time with some constant factor (it would point to similar results as mine), instead of cycling through a linear-time cycle with a polynomial algorithm as you do in your paper. Ref.: "Now cycle through n variables must be repeated to rename them to X´k so that new replaced variables marked as not unique will be negations which must be replaced with 1 − X´k, if they found second time." This type of cycling is immediately suspect of being exponential.
But again, just a super brief look, caveat emptor.
Maybe these two articles explain a little more:
http://arxiv.org/abs/1012.5804 (multilogic + nondeterministic Turing machine = linear string symbol counter), or in other words, binary logic is not usable for NP
http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijor
(How to solve kSAT in polynomial time by Algirdas Maknickas - O(n^3.5) complexity for NP)
I will hopefully be able to offer a better quality response later, but my inclination and intuition tell me to say that "multi-logic" is reducible to the current understanding of qudits (d-valued versions of qubits) as seen here:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.1225&rep=rep1&type=pdf
Granted, at a quick glance, this one is more convincing than the previous paper you supplied... mostly because the definitions are clear, you refer to Fagin's Theorem 24, which is also quite clear, and you then provide a solution with a constant. I suppose now it is just a matter of checking whether that constant is true or false. Unfortunately, I cannot take the time to do so until my schedule has cleared up a little. I must also say that this paper is very close in some ways to my work currently in review, which actually makes me think that maybe you did it. I took a different approach, but I would expect something like "multi-logic" to exist. Again, I'm not looking closely enough for a thorough comment right now.
I did notice that your constant changed from 3 to 6 across the different versions, which suggests some caution that the solution might not be constant. But somehow I would intuitively expect the constant to be 6 based on my work, since the formal language I created has a minimum of 6 independent symbols for the proof to work. To reiterate, this is not a formal review, just first impressions.
See it like this, for simplicity:
1. Assume Shannon's theorem about mathematically provable secure cryptography is true: there are no mathematically provably secure encryption methods other than the one-time pad.
2. Assume Cook's theorem about NP-complete problems.
3. We know that some encryption methods are based on the NP-complete knapsack problem: Merkle-Hellman cryptosystems for example.
4. Therefore, we can safely conclude that such cryptosystems are not secure and P=NP...
All this assuming that what Shannon meant by provably secure did cover exponential time problems...
It's a heuristic argument that should nevertheless incline one in a particular direction.
The notion of polymorphic cryptography has been discussed recently, that is, cryptographic systems that change completely, morphologically, and often enough that the attacker gets confused... Other methods have been devised to try to make the one-time pad practical...
This, of course, is arguably the seminal problem: does P=NP? If one accepts counterfactual definiteness in QM (and CT as true, of course), then the problem takes on a whole new dimension from the perspective of Bell's Theorem. Perhaps even more interestingly, different conclusions suggest themselves depending on whether inhomogeneous (IBI) or homogeneous (HBI) Bell inequalities are considered. The epistemic, metaphysical, and foundational repercussions of such an analysis may well prove especially profound.
http://vixra.org/pdf/1705.0226v1.pdf
It might be useful. It is presented in:
https://www.uni-log.org/ss6-COM.html
Dear Dr Ohto
I believe P = NP. But proving it might take the lifetime of our universe.
Some people may believe that the answer to this question is settled and that it does not merit discussion. I have collected empirical data from thousands of hours of experiments with NP-Hard problems and I have strong evidence that P and NP are separate except for "small" problems.
First I will refer you to one of the papers that I have posted on RG:
Empirical Analysis of Microsoft Project Scheduling with Resource Leveling
Figure 2.
This figure shows the distribution of solutions (durations) of schedules produced by four separate methods; each yields a Gaussian distribution.
On reflection, it should be obvious that NP-Hard problems where the metric value of a solution is determined by the sum of n independent sub-elements will satisfy the criteria of the central limit theorem.
I have attached 4 files that illustrate this conjecture.
I have performed thousands of hours of testing and have concluded that all NP-Hard problems that satisfy this criterion will have a Gaussian solution space, with the optimal solution found at the extreme end of the bell-shaped curve.
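For readers who want to reproduce the flavour of this observation without the attached files, here is a small self-contained sketch (illustrative only; the instance and sample sizes are arbitrary) that samples random tours of a random TSP instance, each tour length being a sum of n edge lengths:

```python
# Sample random tours of a random Euclidean TSP instance; each tour
# length is a sum of n edge lengths, so the sampled lengths cluster
# into a bell-shaped distribution around their mean.
import math
import random

random.seed(1)
n = 40
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % n]])
               for i in range(n))

order = list(range(n))
samples = []
for _ in range(10000):
    random.shuffle(order)              # a uniformly random tour
    samples.append(tour_length(order))

mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(f"mean tour length {mean:.2f}, std {std:.2f}")
```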
Additional information, although shrouded in the language of patents, can be found in the following.
Method for benchmarking combinatorial problems and their solutions,
US Patent US8543524B1, 2013, Fox, BR and S. Jowers.
Benchmarking progressive systems for solving combinatorial problems (divisional),
US Patent US8849733B2, 2014, Fox, BR and S. Jowers.
Benchmarking progressive systems for solving combinatorial problems (divisional),
US Patent US8983881B2, 2015, Fox, BR and S. Jowers.
Solving a combinatorial problem using a quality metric value of a characteristic solution
US Patent App. US20160350072A1, 2016, Fox, BR and S. Jowers.
Solving a combinatorial problem using a quality metric value of a characteristic solution thereof
US Patent US9990178B2, 2018, Fox, BR and S. Jowers
And most important, please consider
Solving a combinatorial problem using a quality metric value of a characteristic solution thereof
US Patent US9990178B2, 2018, Fox, BR and S. Jowers
where I show that it is possible to "estimate" the metric value of the optimal solution to a combinatorial problem without actually finding the optimal solution, by determining the metric values of a sequence of sub-problems of increasing size.
If there is sufficient interest, I will repeat some of my prior work on large TSP problems.
Daniel, I provided an esoteric answer to the problem previously, but if you are still interested I may be able to provide a substantive, substantially sound answer to your question, with one caveat (in the absence of some mathematical and computational miracle, P is not equal to NP). What have you concluded?
The equation P=NP holds, as proved in the article PNP.pdf.
See my post from April 18. The short answer: it is a matter of scale. You may be able to find the optimal solution to a Traveling Salesman Problem of size 100, but if you believe that Traveling Salesman Problems can be solved in polynomial time, you must show that your method can solve Traveling Salesman Problems of increasing size (growing linearly or quadratically, for example) with increasing effort (time) no greater than a polynomial factor. One of the largest TSP problems solved (18,512 cities in Germany) required the equivalent of one year of computer time.
However, there is no need to find "real-world" TSP problems. The best approach is to select a square or circular shape, generate a set of random points within that shape, and create a family of TSP problems of ever-increasing size by increasing the number of points within that shape. Anyone reading this note can within an hour write a program that will create this ever-growing family of problems; they can at the same time "solve" each member of the family with their preferred TSP solver, and by tomorrow morning be able to report the relationship between problem size and time expended by the solver. They will diverge.
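As a sketch of that experiment (illustrative only; brute-force exact search stands in for whatever solver the reader prefers, and the sizes are kept tiny so it finishes):

```python
# Generate a family of random TSP instances of growing size in the unit
# square and time an exact brute-force solver on each; the per-instance
# time grows roughly n-fold with each added city.
import itertools
import math
import random
import time

def exact_tsp_length(pts):
    n = len(pts)
    best = float('inf')
    for perm in itertools.permutations(range(1, n)):  # fix city 0 to kill rotations
        tour = (0,) + perm
        length = sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % n]])
                     for i in range(n))
        best = min(best, length)
    return best

random.seed(0)
for n in (7, 8, 9, 10):
    pts = [(random.random(), random.random()) for _ in range(n)]
    t0 = time.perf_counter()
    exact_tsp_length(pts)
    print(n, f"{time.perf_counter() - t0:.2f}s")
```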
To prove that P=NP you must prove that there exists a solver that can solve Traveling Salesman Problems of increasing size with increasing effort (time) no greater than a polynomial factor.
Typically such proofs rely on "induction" beginning with a first problem of size (n) and then proving mathematically that the necessary relationship holds for a problem of size (n+1).
Unfortunately, the solution space of the most common NP-Hard problems grows exponentially (n factorial, in the case of the TSP) with the number of elements in the problem (n).
This is an interesting question, and if you believe that you can prove P=NP, first try these experiments and then try to prove the relationship using simple induction.
Look at my answer in cs stackexchange.
https://cs.stackexchange.com/questions/91728/how-are-boolean-circuits-used-for-solving-p-vs-np/140988?stw=2#140988
Dear Profs. Page, Tan and others,
We have proved the PvsNP problem (Mallick, Hamburger & Mallick (2016, 2019)), one version of which is listed on my ResearchGate page. We have won the Millennium Mathematics Prize for it, and for the Field Theory we may win the A.M. Turing Award. Sorry for pressing our elation on you. Thanks.
Earl of York Sir Soumitra Kumar Mallick, corresponding author
Time is money. So we have proved, using Stock Market Experiments of internetworking and globalisation and continuous online trading (which prove the fundamental mathematical computability and rationality of the market mechanism, transcending all Nobel Prize Sciences including Computer Science), that on successive dates the proof that the Stock Price is P(X), and the verification the next day of that fact by that day's Stock Price equilibrium model of the same stock exchange (market), holds with various statistical confidence levels. Hence, we cannot a.s. infer that P=NP. Mallick, Hamburger & Mallick (2017). This establishes experimentally the PvsNP problem in the sustainable ECommerce Internetwork Cybernetic Spacetime field. Q.E.D.
I should add: all of the above at RTP, because the condition is competitive interest rate = risk-free rate + efficient market risk premium, which holds at RTP where everyone can participate. SKM, SRC, SM of RHMHM School at IAS Princeton, USA.
Whether the problem existed in the first place solely depends on who is actually knowledgeable about it:
As for our human species, but also any other “intelligent” species: knowledge can easily be forgotten or misinterpreted in the course of history, including but not limited to the P vs. NP question.
N.B. algebra, incl. our current base-10 numeral system, was developed some 1200 years ago in classical Arabic, only to be translated into common languages like Greek and Latin in the 11th and 12th centuries
https://www.ted.com/talks/terry_moore_why_is_x_the_unknown
Let’s assume that the P vs. NP question is publicly unanswered until today and let’s further assume, that whoever knows its answer is not only smart but also has some broader wisdom. In light of the global conflicts our species fights and with respect to its complexity-based cryptographic infrastructure, only four out of five possibilities to answer the question would be subject to immediate public debate:
The fifth possibility of P=NP with a constructive proof (hence, an algorithmic procedure i.e., computer program) and a sufficiently small constant/polynomial would represent a Black Swan Event not only to the current cryptographic infrastructure but also to all other possible areas of application.
Whoever is knowledgeable about the fifth possibility, meeting common principles of faith and civilization, would certainly try to assess whether there is a tradeoff for the human species in making the knowledge public domain.
One thing “whoever” would know by simple logical reasoning is that if “whoever” knows, anybody else could also know.
Knowing that anybody else could know implies that it could already be known by anybody else.
With this knowledge it could be assumed that, in contrast to “whoever”, this anybody may not share the same wisdom.
In that case, although not publicly known, the fifth possibility may already be in action.
If it is already in action but not publicly known, it can be assumed that the fifth possibility is not subject to any tradeoff for the human species but only for a few at most. Historically, knowledge was always the domain of elites, high priests and organizations thereof, with hermetic languages being the preferred choice of record.
Accordingly, “whoever” must continue to consider making the knowledge public domain, should it serve the progress of civilization.
How could “whoever” possibly proceed with the respective SWOT analysis?
Although esoteric at first glance, “whoever” could search for heuristic metaphors such as represented by the Fermi Paradox and its dissolution. https://youtu.be/sNhhvQGsMEc
The Fermi Paradox argues, among other things, that any intelligent species may drive itself extinct when it reaches a certain technological level (a great filter), e.g., nuclear technology or, with reference to this debate, the fifth possibility. This argument of great filters would then explain why we are not aware of intelligent alien civilizations.
Evidently, any knowledge of technologically advanced, intelligent alien civilizations would represent a definitive dissolution of the Fermi Paradox, hence of any apodictic existence of great filters, irrespective of whether publicly known or known just by “whoever”. https://youtu.be/ZBtMbBPzqHY
https://youtu.be/5v0y99VM9eA
With this heuristic metaphor at hand, “whoever” would know that the non-existence of a great filter in conjunction with the fifth possibility would not be a sufficient condition to bless humanity, yet a necessary one.
Now with “whoever” continuing to strive to bless humanity, any consideration of integrating the fifth possibility vertically with licensing and/or a dedicated enterprise business may safely be ruled out.
This is because “whoever” would know that although the public enterprise space is honorable to a large extent, only one bad player would suffice to enslave the species even more (for a while), while going enterprise is not defendable by any means, not even with all the even and odd US Navy fleets at one's disposal.
It should be apparent by now that “whoever” would not even consider letting the fifth possibility be an asset of a nation state or any treaty thereof.
What is left for “whoever” to make the fifth possibility publicly known?
A horizontally integrated, collaborative cryptocurrency leveraging the fifth possibility for value creation, safe against its own method with a million-or-more-bit public key, and being an irreversible part of the global communication infrastructure, such as Bitcoin?
Or a Quantum Cryptocurrency with bullet proof public key distribution run by monopolies such as the defence sector or nation states?
Just a rumor about the fifth possibility may disrupt the current cryptographic infrastructure, the backbone of today's digital privacy & security, property, trade, currency etc.
Is there a tradeoff?
The answer may be found in astro-political discussions in conjunction with non-tangible trade. Without currency.
https://www.tandfonline.com/doi/abs/10.1080/14777620801910818
In short: As of 2022, we don't have a publicly known result yet. Shall it ever become public domain, it will be available under WTFPL.
http://www.wtfpl.net/about/
Whether P=NP or not is still an unsolved Clay Institute Millennium Problem. My view is that P!=NP, simply because it seems much easier to check a solution to a decision problem than to produce a solution. This intuition is related to the fact that a program running on a non-deterministic Turing machine is a computable functional over an exponential-size computation tree of possible state transitions. Whether P=NP can be rephrased as whether that computable functional is polynomial time or not.
Andrew Powell ironically, "intuition" is no good advisor for complexity --> Wheat and chessboard problem
https://en.wikipedia.org/wiki/Wheat_and_chessboard_problem
Karim Daghbouche you make a reasonable point, and the reason P?=NP is an open question is that no one has a convincing argument for it one way or the other. The fact remains, though, that it is a lot easier to verify a solution than it is to come up with one, meaning that you can write a program that checks any solution against accepted axioms (or a library of theorems) subject to the overhead of formalising the proof (which can be a very large overhead), but no one knows how to write a program that does significantly better than brute force for deciding an NP-complete problem in the general case. Is this "intuition"? Well, in the sense of a shortcut for thinking based on experience, it is.
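To make the asymmetry concrete, here is a small sketch of my own (using SAT as the stand-in NP-complete problem; the formula is arbitrary): the checker runs in time linear in the formula, while the only general solver written here tries all 2^n assignments.

```python
# Verifying vs. searching for SAT: checking one candidate assignment is
# linear in the formula size, while the naive solver below tries all
# 2^n assignments.
from itertools import product

# CNF as clause lists; literal +i / -i means variable i is true / false.
cnf = [[1, -2], [-1, 2], [2, 3]]

def verify(assignment, cnf):
    """Polynomial-time check of one certificate."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in cnf)

def brute_force(cnf, n):
    """Exponential-time search over all 2^n assignments."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if verify(assignment, cnf):
            return assignment
    return None

print(brute_force(cnf, 3))  # {1: False, 2: False, 3: True} satisfies cnf
```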
Andrew Powell thanks for the prompt and qualified follow-up - please allow a couple of lines
P?=NP was an open question and still is for the general public, where gridsat.eth.link provides all resources to check:
A) formal step-by-step arguments with three independent constructive proofs that P is indeed NP
B) because of the constructivism (vs. indirect argument)
i) the open source code provided is an efficient SAT-solver with CNF/DIMACS input outputting BDDs where SAT is NP-complete (but in P with an upper bound of M=4 for M=#clauses)
ii) four patents with one issued by USPTO
C) while parallel computing is NP-complete, the SAT-solver scales linearly with no limits
regarding intuition and the very intention of our conversation, we need inspiration to feed/adjust our intuition - further to the latter, just consider the origins of Algebra in classical Arabic 1200 yrs ago and its abstraction with Boolean algebra, to conclude that the ambiguity associated with Latin indices for VARs such as X may be the main cause of exponential blow-ups of truth tables or their equivalent, i.e., BDDs.
now try to reduce (semantic) ambiguity with syntactic indicators to always resolve SAT efficiently, while considering the semantics of SAT (VARs) only to be either true or false (0 or 1), with the possible value combinations in truth tables always constant, independent of VAR-names (which are arbitrary) and VAR-meaning (the semantics of VARs are solely defined by the SAT-reduction, hence only known by the ones who reduce but never by the ones who resolve the SAT-expressions)...
finally, consider constructive P=NP with a low upper bound and an easy, straightforward implementation fitting on thumb-drives as not defendable by any contemporary nation state or alliance, hence not qualifying for any business model.
this is when intuition shall kick in and deem it consistent that P=NP is:
A) neither known to the general public nor general academia
B) embedded in the web3 (IPFS) for max transparency and censorship resistance: gridsat.eth/ (an IPFS browser such as Brave, Opera or Chrome required - else gateways such as gridsat.eth.link or gridsat.eth.limo make the deal for gridsat.io to resolve with http for the time being)
C) Open- and Free Everything
however, it would be intuitive and logical that P=NP!, non-constructive P=NP or undecidable P?=NP would have paved its way to the general public, just as the whole P?=NP question made it somewhat prominent...
Karim Daghbouche, that was a very prompt response. I think P?=NP is still an open problem (see https://www.claymath.org/millennium/p-vs-np/), but I will look at the links you have provided. (Mind you, I am not an expert.) My favourite NP-hard problem is the subset sum problem, which has NP-complete versions. I like it because it is easy to understand: in one version, whether a set of integers has a subset whose sum is 0. How do you do this in general without enumerating all subsets and adding up the members?
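For what it is worth, here is a sketch of the standard answer to my own rhetorical question (my own illustration): a dynamic program over achievable sums decides the zero-sum version without explicitly enumerating subsets, though it is only pseudo-polynomial, since the set of reachable sums can itself grow with the magnitude of the numbers.

```python
# Decide whether some non-empty subset of nums sums to 0 by growing the
# set of sums achievable by non-empty subsets: pseudo-polynomial, since
# the running time scales with the number of distinct reachable sums
# rather than with the 2^n subsets.
def zero_subset_sum(nums):
    sums = set()                          # sums of non-empty subsets seen so far
    for x in nums:
        sums |= {s + x for s in sums} | {x}
        if 0 in sums:
            return True
    return False

print(zero_subset_sum([3, 34, -4, 12, 5, -8]))  # True: 3 + 5 - 8 == 0
print(zero_subset_sum([1, 2, 4]))               # False
```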
Andrew Powell enjoy the link with plenty of visualisations, including a 45-min. lecture - since the NDP "only" resolves SAT problems, and all NP problems can be reduced to SAT, one needs to reduce subset sum to SAT as discussed here: https://cs.stackexchange.com/questions/107376/reduction-from-3-sat-to-subset-sum-problem
just one note on the logical fallacies we are all tricked by: if one's intuition assumes P=NP!, making it an axiom to tranquillise and relax effort-making, there cannot be any other conclusion than P=NP! - evidently.
if one were to go with Clay and all the other highly esteemed, prestigious organizations alone, not considering then-unknown archives such as arXiv, the Poincaré conjecture would still be listed as unresolved...
Andrew Powell the gridsat links work if you copy them from this very discussion and paste them into a browser, or gridsat.io shall work as well (it is http relayed to IPFS) - else just google for GridSAT Stiftung - sorry for the inconvenience some of the gateways incl. researchgate may have caused
Thank you Karim Daghbouche . I still cannot reach your links. The error is "DNS points to prohibited IP".
The polynomial time reducibility of NP-complete decision problems to 3-SAT is a classic result of the literature, due to Stephen Cook and Leonid Levin (SAT is NP-complete) and Richard Karp (SAT is polynomial time reducible to 3-SAT). See https://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem.
The problem with 3-SAT is that it is not as easy to understand as the subset sum problem (SSP). I also find it easier to check whether an algorithm runs in polynomial time on SSP. But that is probably a personal preference, and would make no difference to a computer.