EDIT: after this note, the author removed the tag "Computational Complexity Theory" and added the more appropriate tag "Complexity Theory".
Just a quick note (I saw that this question has been (wrongly?) tagged with "Computational Complexity Theory"): "complexity theory" studies complex systems; the relations with chaos theory are quickly explained in the Wikipedia entry (http://en.wikipedia.org/wiki/Complex_systems#Complexity_and_chaos_theory). "Computational complexity theory" is a very different field; it is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty and relating those classes to each other. For a quick introduction see the Wikipedia entry (http://en.wikipedia.org/wiki/Computational_complexity_theory) or a good book like M. Sipser, "Introduction to the Theory of Computation".
The short answer is that "chaos theory" sells books to the general reader. Complexity, complex systems, nonlinear systems, dynamical systems, and their related notions and properties, as well as the ways these overlap (e.g., ODEs, phase space, bifurcations, synchronization, etc.), tend to be used to describe "chaotic systems" in the more technical literature. On the other hand, "chaos theory" (thanks perhaps mainly to Gleick's popular science book) tends to be neither well defined nor usable (and consequently little used) within the sciences.
There is no single definition of complexity. Computational/algorithmic metrics are one such measure. The basis for many measures is Shannon's extrapolation of entropy from physics to information theory (essentially creating information theory in the process). However, there are many more ways to define or quantify complexity. A system, such as a network of supercomputers, can have an enormous number of "parts", yet be simpler than 'simple' living systems such as weeds, ant or bee colonies, or even microscopic organisms. Complexity measures that rely on the possible configurations of a system's parts tend to vastly underestimate the complexity that comes from functional processes or properties emerging in systems with fewer parts: systems that lack the compartmental organization of computers, and that are not confined to the limited configurational phase space found in similar many-part systems lacking self-organization, self-determination, and complex intra- and inter-actions with component parts (in the former case) and external factors (in the latter).
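To make the entropy-based idea above a bit more concrete, here is a minimal Python sketch (my own illustration, not something from the discussion; the function name shannon_entropy and the example strings are made up) of Shannon's H = -sum p_i log2 p_i applied to a symbol sequence. It also illustrates the caveat above: the number says nothing about functional or emergent complexity.

from collections import Counter
from math import log2

def shannon_entropy(sequence):
    # Empirical Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Made-up examples: a repetitive string scores low, a more varied one scores higher,
# yet neither number captures the functional or emergent complexity discussed above.
print(shannon_entropy("aaaaabbbbb"))  # 1.0 bit per symbol (two equally likely symbols)
print(shannon_entropy("abcabcabcd"))  # about 1.90 bits per symbol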
Complexity theory is a field of theoretical computer science and mathematics; chaos theory is a field of mathematics. So no. They are interested (in general) in different questions (as a whole), so they can't be equal. One is interested in classes of problems and in forming those classes using numerous techniques. Chaos theory is more of an area overlapping many empirical sciences and the mathematics concerned with the behaviour of chaotic systems, where sensitivity analysis is vital.
1) Complexity theory is not just a field of computer science and mathematics; among other things, more work in complexity theory is published in the systems sciences than in the computer sciences.
2) "Chaos theory" is not generally either a mathematical or any other type of theory. It was used and is sometimes still used in technical literature, but while catastrophe theory is now largely unknown to the public "chaos theory" remains despite the adoption of different sets of terminology and the relative infrequency of "chaos theory" in academic discourse (terms such as "complexity", self-organization, self-organized criticality, emergency, dynamical systems, nonlinear systems, etc.).
3) Computational/algorithmic complexity grew out of communications theory and was inspired by and mostly constructed by physicists.
4) It is also extremely limited in scope.
5) Complexity is best seen as an interdisciplinary field or research area as nonlinearity and "chaos" are ubiquitous in real-world applications, models, theories, etc. Just check out the journals and monograph series published under Springer complexity (http://www.springer.com/physics/complexity?SGWID=0-40619-6-127747-0)
6) Don't take my word for it:
Walby, S. (2007). Complexity theory, systems theory, and multiple intersecting social inequalities. Philosophy of the Social Sciences, 37(4), 449-470.
Good points Andrew (with one major exception, see (4), and some bits of (3)). I see that the original author of the question changed the tags again, so now it is strictly the vague term "complexity theory", which can be associated with a variety of things. Either way, to clarify my answer: it is definitely not the same thing as chaos theory, or at best partially ambiguous, since theoreticians often refer to computational complexity theory simply as complexity theory. So it usually depends on who you talk to.
From what I know, chaos theory is where you set up a system with initial conditions and then let a dynamical system run. The analysis of such systems is fundamentally mathematics, but it is cross-disciplinary in its applications. Sensitivity analysis lies at its heart. In practice these systems are often studied through simulation: if I execute a simulation based on initial parameters and let the system run, what is the outcome? Usually chaos theory relates to systems whose outcomes can become unstable or widely divergent from an initial state. Usually they model actual systems that can vary greatly, and the goal is to find some similarity or pattern in the mess. A great example is the n-body problem, though I will mention that the n-body problem is also studied in dynamical systems.
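To illustrate that sensitivity to initial conditions concretely, here is a minimal Python sketch (a toy example of my own, not anything from the thread) using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4, a standard chaotic regime: two starting points that differ by one part in a billion diverge to order one within a few dozen iterations.

# Toy illustration of sensitive dependence on initial conditions via the logistic map.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # initial condition
b = logistic_trajectory(0.200000001)   # same, perturbed by 1e-9

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  |x_a - x_b| = {abs(a[n] - b[n]):.3e}")
# The separation grows from about 1e-9 to order 1 within a few dozen steps.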
@Andrew: Can you clarify what you mean by bullet (4)? As a specialist in that area, I would love to hear why somebody in psychology (not an area of Computer Science or Mathematics) would think computational complexity theory, or the different models of what you may call "algorithmic complexity", is limited. It can already tell us MANY things; here are some examples:
1) The existence of whole classes of algorithms. (A great example is the existence of an FPTAS for particular problems; it has an inherent relationship with pseudo-polynomial time algorithms and the technique of dynamic programming; see the sketch after this list.)
2) The existence of approximation algorithms. (A great example is hardness of approximation: the ability to ask whether there is a ρ-approximation algorithm based on the computational complexity hierarchy.)
3) A complexity hierarchy relating to problems. We can sort entire classes of problems into bins and ask which algorithms exist for them and which do not.
4) A brief clarification about your point (3). Most of what grew into modern computational complexity theory didn't arise from communications; it came from looking at optimization problems, though some optimization problems are in communications. See Stephen Cook's famous paper for more information (anybody who can comment on CCT knows what this is, so I'd assume you know where to find it). There are older methods that are still studied and built upon but that need not be at the heart of computational complexity theory (though they are still a part of it); great examples are Kolmogorov complexity and entropy. Many of these methods still find a hearty place in communications, but now also in areas such as bioinformatics, and still in cryptography. For those methods, you are correct that much of the research around them derives from developing results in telecommunications (e.g., work pioneered by Shannon). These are all areas of study in the Theory of Computation, and they relate to computational complexity theory, but some find their applications in Engineering, Physics, or even Biology.
5) Maybe, in the future, it will tell us something more than what we currently know about P vs. NP (a fundamentally vital problem in Theoretical Computer Science).
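To make point (1) above concrete, here is a minimal sketch (a standard textbook construction, not tied to any particular paper mentioned here; the function name subset_sum and the example numbers are mine) of the pseudo-polynomial dynamic program for SUBSET-SUM. Its O(n * target) running time is polynomial in the numeric value of the target but exponential in the number of bits needed to write it, which is precisely the kind of structure that FPTAS constructions exploit.

# Classic pseudo-polynomial dynamic program for SUBSET-SUM.
# Runtime O(n * target): polynomial in the *value* of target, exponential in its bit-length.
def subset_sum(values, target):
    reachable = [True] + [False] * target      # reachable[s]: can some subset sum to s?
    for v in values:
        for s in range(target, v - 1, -1):     # go downward so each item is used at most once
            if reachable[s - v]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))     # True  (e.g., 4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))    # False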
Many results in this area are vital to asking questions about the intractability of problems. It is one major side of the coin in the foundations of Computer Science. The other major side of the coin is Computability Theory and other sub-fields concerned with decidability.
Hope this clears that up! I just needed to nip that one in the bud, since it is a common misconception, and as somebody who knows a thing or two about the areas in question and the history of computer science, I thought I would comment.
Thanks to all the gurus for their answers. Daniel, can you give me a direction for starting to work on complexity theory? I am in a project to define sub-cultural pattern analysis to find the cultural DNA of our country.
Since what you are interested in is likely a system, try to formally define the problem by creating a model. What do you have as input, and what do you want as output? See if it maps to anything known in the literature when described in formal (mathematical) terms. For example, lots of bioinformatics problems have their roots in NP-hard problems, while some others are efficient to solve.
Here is an example:
Longest common subsequence (LCS). In biological terms, it could be: given two DNA sequences, find the longest (potentially non-contiguous) common sequence of A/C/G/T symbols between the two. This can be recast as follows:
Given two strings X and Y, find the longest sequence of symbols that appears in both strings in left-to-right order (the symbols are common to both but need not be adjacent in position). This problem can be solved via dynamic programming.
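Since the dynamic program is the crux here, a minimal sketch may help (the standard textbook recurrence; the function name lcs_length and the example strings are illustrative, not from the discussion): a table L[i][j] holds the LCS length of the first i symbols of X and the first j symbols of Y.

# Standard O(len(x) * len(y)) dynamic program for longest common subsequence length.
def lcs_length(x, y):
    L = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]   # L[i][j] = LCS of x[:i], y[:j]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[len(x)][len(y)]

# Made-up DNA-style example: one LCS of the two strings is "GTAC" (length 4).
print(lcs_length("AGGTAC", "GTTACG"))  # 4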
Now this is where it gets interesting: instead of 2 DNA sequences, what if I want n DNA sequences and want to run the same process? This then becomes more like the n-body problem. That is one thing you could analyze, but I am sure there are more interesting things to analyze for DNA strands. One example is trying to form phylogenetic trees so you can see the relationships between the DNA strands as a tree.
For myself, that's about as much as I can help you with. I do not have enough expertise in DNA analysis to be of much further help unless it relates to algorithms and analysis in that regard (maybe ask a person in bioinformatics; they may be aware of some techniques). I know that in bioinformatics they look at some problems like this from the computational side.
I'd be happy to clarify, although I can only do so briefly now. First, regarding your bullet point: I'm not a psychologist. Well, computational neuroscience could still be considered psychology, but my main focus is complex systems. The brain is ideal for this, particularly as theoretical biology and systems biology (unlike neural models) have eschewed algorithmic complexity, and the argument by the founder of systems biology that biological systems must have noncomputable models has been continued, perhaps most centrally, by Louie in papers like:
Louie, A. H. (2007). A living system must have noncomputable models. Artificial life, 13(3), 293-297.
As for the relevance or utility of computability theory, you might want to check out Gell-Mann's article in the first issue of the first volume of the journal Complexity:
“students of computational complexity are typically concerned with whether a sequence of larger and larger problems can be solved in a time that grows as a polynomial in the size of the problem (rather than an exponential or something worse). It is probably safe to say that any measure of complexity is most useful for comparisons between things at least one of which has high complexity by that measure.
Many of the candidate quantities are uncomputable. For example, the algorithmic information content of a long bit string can readily be shown to be less than or equal to some value. But for any such value there is no way of excluding the possibility that the AIC could be lower, reduced by some as yet undiscovered theorem revealing a hidden regularity in the string. A bit string that is incompressible has no such regularities and is defined as "random." A random bit string has maximal AIC for its length, since the shortest program that will cause the standard computer to print it out and then halt is just the one that says PRINT followed by the string.
This property of AIC, which leads to its being called, on occasion, "algorithmic randomness," reveals the unsuitability of the quantity as a measure of complexity, since the works of Shakespeare have a lower AIC than random gibberish of the same length that would typically be typed by the proverbial roomful of monkeys."
(http://complexity.martinsewell.com/Gell95.pdf)
Computational complexity is wonderfully applicable to algorithms and to whether an algorithm runs in polynomial time. Of course, "complexity" as applied to formal languages, to organized systems designed for computation, and to measures relating to formally defined procedures and problems may appear (and is, in a sense) widely applicable across the fields of computer science. What I meant by 4) is pretty much what you described. You described a lot of ways in which complexity theory is used in the computer and computational sciences (although in the latter it is not by any means the only measure), but these are pretty restrictive given that complexity theory has been used in everything from healthcare to cellular biophysics. I'm sure you've read Farjudian's (2011) "On the Kolmogorov complexity of continuous real functions", so you know how, perhaps, the foundational work on computational complexity holds up when extended to non-computable functions. But you may not know how much Shannon and von Neumann were essential here (at least the former).
You've also read more than I have, I'm sure, of works like Nies' Computability and Randomness, and of how thoroughly such approaches are limited to the nature of what is essentially defined in opposition to complexity (algorithms) and to the complexity of things that are not complex. It is, however, easier to classify complexity using such an approach. I am away from my sources at the moment, so alas, I can only operate from memory here. But I would be more than happy to provide some actual substance in a response and to learn from your corrections and your expertise.
Sorry for the delayed, more substantive reply, and thanks again for lending your expertise. In fact, I was hoping I might tap into a particular aspect of it that you mentioned: the history of computer science. Most of what I know of it comes from the literature (all the way back to undergrad textbooks) on the history of cognitive science and/or logic. By way of background to my query:
The history of complexity theory is, as seems only befitting, complex. N. Thrift, after pointing to the standard names, identifies "the key moments of complexity theory" as consisting of "chaos, attractors, non-linearity, emergent orders, life at the edge of chaos". You yourself mention Kolmogorov complexity and acknowledge the influence of those like Shannon. You then state that these are "all areas of study in Theory of Computation, and relate to computational complexity theory". I don't disagree that they are areas of study in computability theory and CCT, but I find it curious that you seem to imply (correct me if I'm wrong) that they are primarily objects of study in CCT rather than just as much areas of study in, e.g., physics. After all:
1) Although the background to Shannon's work was communication theory, it is most often associated with information theory.
2) It is based upon a concept from physics (entropy), which came back into the limelight with quantum information theory.
3) While it seems obvious that the use of qubits makes quantum information theory more a product of computability theory than information theory, we need only look to Shannon's 1948 paper in the Bell System Technical Journal: "The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey".
4) What is it that makes Kolmogorov complexity part of CCT? "The term “complexity” has different meanings in different contexts. Computational complexity measures how much time or space is needed to perform some computational task. On the other hand, the complexity of description (called also Kolmogorov complexity) is the minimal number of information bits needed to define (describe) a given object...As it was common to him, Kolmogorov published, in 1965, a short note that started a new line of research. Aside from the formal definition of complexity, he has also suggested to use this notion in the foundations of probability theory." (Durand & Zvonkin's "Kolmogorov complexity" in the volume Kolmogorov's Heritage in Mathematics). Here, Kolmogorov complexity is distinguished from computational complexity, and in an earlier, similar volume (The Kolmogorov Legacy in Physics), we find (what you no doubt know) "in real (computer) life we work only with computable numbers, which have zero Kolmogorov complexity". That quote comes from the volume's first paper, "Kolmogorov Pathways from Integrability to Chaos and Beyond" by Livi, Ruffo, & Shepelyansky. The first paper in the next section also concerns chaos: "Kolmogorov’s Legacy about Entropy, Chaos, and Complexity" by Falcioni, Loreto, & Vulpiani. "Deterministic chaos", after all, owes much to Kolmogorov.
5) I believe that Gödel's work was also foundational for CCT, paralleled perhaps only by Turing himself. Yet Turing's famous paper on the Entscheidungsproblem, like Gödel's famous proof, was written in the context of the formalization of mathematics that Frege began (although, as Russell lamented, Leibniz "would have been the founder of mathematical logic, which would have become known a century and a half sooner than it did in fact").
Granted that contributions originally made in other fields or other contexts have become associated with (often primarily) computer science in general and computational complexity specifically, this is also true of Newton, Leibniz, Cantor, Boole, Peano, Babbage, Frege, Russell, Whitehead, etc. The term "algorithm" dates to the 17th century and as early as 1811 was used in precisely the sense it is today.
I suppose I really have 3 questions. First, one can (and this has been done in both popular and more technical texts) trace computing back to before even Leibniz, but computability theory today is no more the computability of the 19th century than today's use of the word "algorithm" is its use in the early 1800s. So why categorize, say, Shannon's work as BEING the "theory of computing", rather than as contributing to it and/or being useful to consider from its perspective?
Second, why should we consider the work of Shannon, von Neumann, Kolmogorov, and so many others as "areas of study in Theory of Computation, and relate to computational complexity theory, but some find their applications in Engineering, Physics, or even Biology", rather than as "areas of study" in probability theory, information theory, etc., that find application in the "theory of computation" along with other fields (such as those you mention), given the development of information theory from Shannon & von Neumann to Deutsch & Landauer?
Finally, consider the following:
"Complexity theory, however, may be traced back to conceptual antecedents such as the "philosophy of organism" (Whitehead, 1925), neural networks (McCulloch and Pitts, 1943), cypernetics (Wiener, 1961), and cellular automata (von Neumann, 1966). Complexity also owes much to systems theory given shared foci of anti-recutionism and holistic appreciation of system interconnectedness (von Bertalanffy, 1968)."
What is it about "complexity theory" that makes it, from a historical point of view or from a modern research perspective, part of computer science and makes whatever you mean by "chaos theory" something quite different (given the number of papers on chaos in Studies in Fuzziness and Soft Computing alone)?